The Impact of Social Media on Traditional Knowledge Management

December 16, 2015 - 1:12 pm

Successfully implementing knowledge management, which is broadly defined as the identification, retention, effective use, and retirement of institutional insight, has been an elusive goal for most organizations. Some of the smartest people I have worked with have been frustrated by their efforts, not through lack of trying or ability but by the inherent challenges it presents. Now the emergence of social media, and the way it democratizes the creation and use of knowledge in the enterprise, is forcing us to rethink our assumptions.

To understand and discuss the challenges of the traditional approaches to knowledge management, I’ve categorized them into two simple buckets: behavioral and technical.

1. Behavioral

In order for a Knowledge Management System (KMS) to have value, employees must enter new insight on a regular basis and they must keep it current. Out-of-date information has limited use beyond its historic value. Seldom are either of these behaviors adequately incentivized. In fact, by being asked to share their tacit knowledge, many employees believe they are reducing their own value to the organization. In addition, updating the information requires real effort, which is rarely a priority relative to an employee's core responsibilities. Even organizations that have dedicated resources for managing knowledge struggle to keep it current and to enforce adherence to their single source of truth.

2. Technical

If you want to find out something about your organization, say, the revenue of the business, it’s often easier to use a popular search engine than to use your own internal knowledge system. Try this yourself.

It's remarkably difficult to organize information in the right manner, make it searchable, and then present it so the most relevant responses are at the top of the search results. Organizational information is hardly a model of pristine structure. While public search engines can rank results using signals such as the number of web pages that link to a given page (a good proxy for popularity), internal systems have no such equivalent. Unstructured content is king on the public web, whereas it is the bane of the enterprise.
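To make this concrete, here is a minimal sketch in Python of the kind of link-counting signal a public search engine can lean on. The intranet page names and links below are entirely made up for illustration; the point is simply that a link graph gives you a ranking signal that most internal repositories lack.

from collections import Counter

# Hypothetical intranet link graph: each page maps to the pages it links to.
link_graph = {
    "intranet/home": ["hr/benefits", "finance/q3-report"],
    "hr/benefits": ["hr/holidays"],
    "eng/wiki": ["finance/q3-report", "hr/benefits"],
    "finance/q3-report": ["finance/q3-appendix"],
}

# Count inbound links: a crude popularity signal, far simpler than, but in the
# spirit of, what public search engines use to order results.
inbound = Counter(target for targets in link_graph.values() for target in targets)

def rank(hits):
    """Order search hits by how many other pages link to them."""
    return sorted(hits, key=lambda page: inbound.get(page, 0), reverse=True)

print(rank(["hr/holidays", "finance/q3-report", "hr/benefits"]))
# -> ['finance/q3-report', 'hr/benefits', 'hr/holidays']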

The situation is compounded when employees, disillusioned by the effectiveness of the KMS and the effort required to use it, resort to old habits, like asking colleagues, improvising, or relying on non-official sources. The system often fails to be widely adopted—at best it is used by a small proportion of the organization—and no amount of effort is enough to see success scale.

Enter Social Media: The Changemaker

It may be time for you to rethink knowledge management in your organization. Social media, already a disruptive phenomenon in the enterprise, has the potential to upend traditional knowledge management systems.

In the old world order, knowledge was typically created and stored as a point in time. In the future, organizational policy or insight is less likely to be formed by an individual creating a document that goes through an approval process and is ultimately published. No, it will more likely begin with an online conversation and it will be forever evolving as more people contribute and circumstances change.

Social media takes knowledge and makes it highly iterative. It creates content as a social object. That is, content is no longer a point in time, but something that is part of a social interaction, such as a discussion. We’ve all seen how content in a micro-blogging service can shift meaning as a discussion unfolds.

The shift to the adoption of enterprise social computing, greatly influenced by consumerization, points to an important emergent observation: the future of knowledge management is about managing unstructured content.

Let's consider the magnitude of this for a moment. Years of effort, best practices, and technologies for supporting organizational content in the form of curated, structured insight may be over. Redoing that work is an enormous challenge, but it may in fact be the best thing that has ever happened to knowledge management.

A Silver Lining

In the long run, social media in the enterprise will likely be a boon for knowledge management. It should mean that many of the benefits we experience in the consumer web space—effective searching, grouping of associated unstructured data sources, and ranking of relevance—will become basic features of enterprise solutions.

It's also likely we'll see increasing overlap between public and private data, used to enhance the value of the private data. For example: want to know more about a staff member? Internal corporate information will include role, start date, department, etc., but we'll now get additional information pulled in from social networks, such as hobbies, photos (yikes!), or previous employment. Pull up client data and you'll get the information keyed in by other employees, but you might also get the history and values of the company, competitors, and a list of executives, gleaned from the broader repository of the public web. I'll leave the conversation about privacy for another day.

It's likely that social media-driven knowledge management will require much less of the "management" component. Historically we've spent far too much time cleaning up, validating, and categorizing the data. In the future, more of our time and our systems will be used to analyze all the new knowledge that is being created through our social interactions. The crowd will decide what is current and useful.

Of course, formality will not entirely fade away. There will still be a role for rigor. Laws, regulations, policies, training documentation, and other highly formal content will require it. But it will live alongside and be highly influenced by social computing.

No doubt knowledge management is an enormously complex space and the impact of social media magnifies the challenges. However, the time is right to evaluate your knowledge management strategy. It may be time to begin anew.


VIDEO: My Vision for Technology Innovation in Local Government

May 25, 2013 - 12:43 pm

The City of Palo Alto is creating social and mobile communities, and collaborating with citizens, volunteers, employees, partners and other agencies to change the way government is delivered.


Does social computing have a role in government?

April 5, 2013 - 2:04 am

As of early 2013, there are over a billion active monthly users of Facebook and almost 700 million daily users. People from across the world use this social network to share and exchange stories, pictures, ideas, and more. These numbers suggest a compelling platform that is engaging humanity in a manner without precedent. Facebook and its competitors have convincingly demonstrated that people will share and collaborate with each other, and with strangers, in an inclusive manner not just for fun, but to make things happen. And yet, when most of the working population of those users goes to their places of employment, they use technologies that reinforce barriers to collaboration. Email—albeit an important business technology—primarily facilitates sequential and non-inclusive collaboration. Until recently, social networking, for all its merits, has had a hard time penetrating the enterprise.

The tide is turning. Today, an increasing number of organizations are exploring, experimenting, and deploying social collaboration tools. They are becoming social enterprises. Why the change? It may be because of better solutions or more leadership support or greater recognition of its potential value. Perhaps it is a mix of all these things and more. But without a doubt, workers rising through the ranks today are more apt to try social networking in the enterprise. They have already accepted and blended the use of technology in their work and home lives.

With enterprise collaboration in the private sector just entering the early majority phase of the technology adoption curve, where does that leave the public sector? As one would imagine, there are only a few innovative agencies that have taken the leap to build a social enterprise. Most notably, the City of Boston and the State of Colorado have been pushing the envelope. With the recent citywide deployment of a social collaboration platform, we can now count the City of Palo Alto as one of those innovators.

City Manager Creates the Business Case
It all began in March 2012, when City Manager Jim Keene approached me and asked if there was a way for him to more easily communicate and engage with all City staff. There's nothing I like better than a tough challenge to solve, and this one met that criterion. We sought to keep costs as low as possible while trying to find a solution that was truly cutting edge. I also wanted to go beyond the original request and find a way not just for the City Manager to engage with staff, but for all staff to be able to connect more easily with each other, to share ideas and documents, and to solve problems together. If we could do that and more, we could create an agency with improved access to timely information and a platform to solve problems more quickly. This, we surmised, would have a very positive impact on the services we provided to the community.

At the City of Palo Alto we have the responsibility and privilege to be a role model in how a public agency should use innovative technology to serve a community. As an example, last year we deployed an award-winning open data service. We didn’t just repeat the work of others; we applied new ideas and innovation to our solution. We know other agencies look closely at what we do as guidance for them. So rather than going down the well-trodden path, we often want to chart new territory. Of course, this strategy has implicit risks, but we don’t apply it to everything. It’s a deliberate and balanced approach for the right, qualifying projects.

The City Begins an Experiment
In April 2012, after extensive research and the application of some of my own work experience in this area, my team and I decided to deploy a small, short-term experiment with Salesforce Chatter. While several impressive solutions met the basic requirements, Chatter was compelling because it was closely modeled after Facebook. It was also exceptionally low cost and, being a software-as-a-service (SaaS) solution, easy to deploy.

My expectation was that we would run the experiment for about three months with around 50 early adopters. To my surprise, many more people wanted to try it out, and the number of users grew very quickly. I then learned something important: the short window was itself a deterrent, because for many users it didn't seem worth the effort for the limited time the tool would be available. So I quickly made the decision to extend the experiment through the fall. In no time at all we had over a hundred users. Most were just viewers, with just a handful of staff brave enough to post items. Staff posted pictures and created groups. One particular group was used as a place to encourage members to eat healthier and do more exercise. Another group was for sharing tips and tricks for smartphones and tablet computers. These were not earth-shattering collaborations, but they showed the promise of collaboration in a manner that previously did not exist.

I personally committed to posting and commenting—common features of a social network—and I encouraged the IT team to follow my lead. By the time we were reaching the end of the experimental period, it seemed premature to end it. More people were joining and there were good discussions about social networking happening across the organization. In late fall of 2012, we had over 200 people signed up. Again I decided to extend the experiment until early in the New Year. At that point, we committed to making a decision whether to deploy it citywide or shut it down.

The Decision to Deploy Citywide
When my team and I reviewed the metrics in early 2013, the data was not overwhelmingly conclusive, but it was sufficiently persuasive to make a decision. In conjunction with our City leadership team and the City Manager, we agreed to deploy Chatter citywide for a period of up to 18 months.

On March 1, 2013, almost a year after we started to think about social collaboration at the City, we invited all staff to participate. So far, so good. Lots of curiosity and great questions. It's far too early to know if we are on a course to change the nature of work at the City. We'll gather that evidence over the medium term. But we're doing things differently and opening our minds to a whole new world. We don't want to play catch-up; we want to lead. It's beginning to be clear who gets it and who is still trying to figure it out. Of course, we have plenty of people who don't get it at all and are not shy to share their view that it doesn't seem to offer them any value. But isn't that one of the greatest challenges of innovation? Those of us tasked with anticipating a possible future, even when we have little idea what that future will bring, must push forward with our ideas despite enormous pressure from the naysayers and antagonists. If there is success, everyone wins. If there is failure, we learn something and then we apply those lessons as we move forward with other innovative experimentation.

In 2004, nobody thought about Facebook. Nobody knew they would want it or even what value it could have in their lives. Less than 10 years later, Facebook, a social network, is one of the most remarkable phenomena of our time, and a billion people have discovered a new, fun, and productive way to interact.

Can we do the same at City Hall?


@Reichental is #3 on List of 50 Most Social CIOs on Twitter Worldwide

January 17, 2013 - 2:40 am

Posted in Huffington Post on January 16, 2013.

Click Here To Read



Google Plus defines an era of disruption at a moment’s notice

August 4, 2011 - 9:00 am

Whether or not you like Google+, or have yet to try it, its introduction continues the important role that a battle of ideas has in shaking up and bringing new value to the marketplace. In the best outcome, robust competition in any business domain should have at least one beneficiary: you, the consumer.

Google+ raises the stakes in the social computing space. With so many people and organizations already invested in other social platforms, Google+ is a manageable gamble with the potential for considerable consequence. Yet for the leading social media incumbents the risk may be existential. Fending off this kind of threat will likely require drastic and prompt measures.

When an entrant wields this much power in an existing market and provokes the potential for rapid innovation in response, this is what I am calling the "G+ effect."

The G+ effect is best defined by the introduction of Google+, but it’s not unique to Google; it is unique to our times.

What is the G+ effect?

The disruptive impact of introducing a new product or service is obviously nothing novel. What is new and profound is that the viral, light-speed distribution of digital information and capability across our connected planet can threaten existing businesses at a moment's notice. The entrant doesn't even have to be game-changing, but the outcome can be. The influence and reach of the provider can produce disproportionate results from merely incremental innovation, whether or not the product itself succeeds. It is the innovator's dilemma in overdrive.

The torrent of punditry that accompanies these introductions is notable on its own. We are also seeing a significant intensification of speculation prior to a release that can unsettle a market.

Of course, being incremental initially doesn't rule out being disruptive later. For example, in the case of Google+, what it becomes in the months ahead and what it may enable could certainly be game-changing. It's far too soon to tell.

It would be easy to conclude that the G+ effect is a destructive phenomenon. Sure, there is something to be said for the uncertainty it can sow, and honestly it is impossible to know quite where it will take us. There is no doubt that existing business players will be challenged in unprecedented ways and some customers may be riled by the constant volatility. I also have to believe that at some point every one of us has a capped quotient for fickleness. But I argue that, at least in the short-term, a dynamic battle of ideas will remain a positive force.

At its core, the G+ effect is an economic phenomenon. Clearly there is an important technical component, but introducing a new product or service that can have rapid and far-reaching impact first and foremost shifts existing market behavior — even if only temporarily. In some instances, for publicly listed companies, the business introducing the technology may experience a bump in stock value (as we have seen with Google+) while its competitors see downward pressure on theirs.

The G+ effect and Google+

Despite only a limited early release, in just a few weeks Google+ has garnered over 20 million participants from across the world. If every one of them had only written the words "Hello World" in the status box, that itself would have been a notable event. Instead, billions of words were added with their attendant photos and videos. Pundits are already claiming the imminent demise of its competition: products that worked hard over several years to earn each subscriber, friend, and follower. (In my view, any notion of the competition's obsolescence is premature.)

Having taken a significant lead in social networking, Facebook has come to occupy a monopolistic position. A sudden injection of viable competition is a great catalyst for innovation. It is one thing for customers to complain about the limits of Facebook Groups and privacy, and quite another for Facebook to respond to the potential competitive advantage that Circles in Google+ create.

Let me be clear: this isn't a battle just between Google and Facebook — although that may be the early nexus of the action — the impact may be felt across the communication, collaboration, and sharing space.

The only good monopoly is the board game

In a topical and recent case study, consider Microsoft: for much of its young history, regulatory commissions have sought to keep it from abusing a monopoly position. Microsoft's huge footprint in the operating system market enabled it to exploit that position. Look at the history of Internet browsers and you have the tell-tale signs of innovation stifled by market domination (remember the first browser war?). It was only when viable alternatives emerged, mostly in the form of Firefox and more recently Chrome and Safari, that we saw an uptick in browser innovation. (Credit also goes to the various communities that work hard for standards ratification.)

Had competition been more rigorous in the early days of the browser, would we be further along with web-based capabilities today?

Currently we see dynamic and healthy competition in the domain of smartphones. But it is also a fragile battle. The market is now largely dominated by Android and iPhone — solutions created by organizations with extremely healthy balance sheets — yet innovation is alive and kicking. Should one stumble, a dominant player could emerge and we could see innovation atrophy. Sure, it is speculative, and there are plenty of participants trying their darnedest to play catch-up. In fact, with the average American replacing his cellphone every 21 months (source: Recon Analytics), this industry is a prime candidate for the G+ effect.

The G+ effect and the future

What the G+ effect might mean for businesses and consumers over the long-term has yet to be determined. Fortunately we can rely on the marketplace to help sort out what happens next.

At least in the short term, as an IT leader I encourage rigorous innovation and competition, as it helps to keep product and service costs low and accelerates the introduction of desired functions. I also want this innovation to restrict the ability of large corporations to create a closed web or to erode the very freedoms that make it so empowering.

But with this level of innovation, I'm also concerned by the costs of change, both in dollars and in user fatigue. It could also exacerbate the problems associated with playing catch-up.

For sure, the G+ effect has the capacity to elicit considerable change in the way many organizations operate and compete. Getting a head start on figuring it out might enable many to pursue the emerging opportunities.


It’s Time for IT to Ask More of the Right Questions

March 16, 2011 - 9:00 am

Today, the IT department is often a victim of its own success. With technology increasingly at the center of business initiatives, there is an insatiable demand for services. And while most IT professionals come to work each day to be productive and add value, more often than not, it's an uphill battle to keep internal customers happy. Working either harder or smarter hasn't necessarily produced the customer satisfaction dividend anticipated. Moreover, it has served to increase expectations of what can be provided and has continued to raise the bar for IT.

Typically, IT will deliver the right thing at the right time (as long as there is leadership support and good requirements), but it can be painful getting there. Internal customers will be happy to get their solution, but they might not be happy with how it was delivered. It's a perception issue: IT is too often judged almost exclusively on how something was produced rather than on what was delivered.

Should IT be chasing kudos or trying to get the job done right?

In the service business, success is often measured by having happy customers. In the marketplace, happy customers are repeat customers. Organizations with internal service departments are not usually subject to these types of competitive pressures. Sure, cost must be managed; otherwise, a service may be better performed outside the business. But even where cost is higher, organizations continue to enjoy the benefits and pay the premium of keeping many services internal. For example, they can exert maximum control and are not subject to continued contractual interpretations and disputes. That said, if you're a captive cost center, quality customer service has to be driven by something else, such as culture, incentives, or vision. In other words: it's a choice.

If an IT team is delivering quality services and products but still not meeting, say, the speed of service expected, that might be an acceptable trade-off. In other businesses, quality may suffer in favor of speed. In project management, there is a maxim known as the triple constraint: changing any one of speed, cost, or scope usually has an impact on the others. In service delivery, the triple constraint is often quality, speed, and customer satisfaction (underlying these is a fourth, inadequately addressed component: risk).

It’s a worthy goal to be both a world-class customer service provider and a producer of high quality products and services. It’s possible to manage the service triple constraint without too many trade-offs. But to be that organization requires an important operating principle: IT must rarely be the arbiter of priorities. That role must live squarely outside of IT.

Changing IT from an organization of “no” into an organization of “go”

I’ve seen it repeated throughout my 20-year IT career: internal customers come to the IT team with a need and it’s IT who says it can’t be done. Customers get frustrated and they have a poor view of the IT team. Usually they are saying “no” because of a capacity issue rather than a technical limitation. When IT says no to a customer, what they’re really saying is that something else is more important. That’s IT being an arbiter of priorities.

Yes, it goes back to IT governance, something I’ve discussed as being absolutely essential to business success.

But while IT governance can work as a process at the leadership level, it will fail when the IT team doesn’t have the understanding and the language of the process to support it as it manifests downstream.

When confronted with a priority decision, an IT staffer needs to move arbitration back to the business.

The staffer typically wants to know what to do, not whether they should do it.

Therefore, you must transition your staff from saying “no” to asking questions about priority and capacity. It certainly can be the case that more than one request has priority. If so, it’s now a question of investment. Spend more and you’ll get more resources.

Bottom line: these decisions are made by the business, not by the IT staffer who’s just trying to do the right thing.

Internal end-users and IT may never have a love affair, but if roles are better defined and understood, all parties will be less frustrated and have greater empathy for where the other is coming from, and customer satisfaction will be firmly focused on the quality of the product or service being provided.


The 2010 technology of the year is …

December 21, 2010 - 9:00 am

While Facebook and the iPad garnered considerable attention this year — and rightly so — it is the free micro-blogging service Twitter that gets my 2010 accolade for the most important technology product of the year.

Now with more than 175 million subscribers, an estimated dollar value that is double that of the New York Times, and 25 billion tweets this year alone, Twitter is becoming a formidable disrupter in multiple domains, including media and the enterprise.

In June of this year, responding to several friends and colleagues who were simply confounded by the merits of Twitter, I posted my first blog entry on the topic. Looking back now, six months later, I see that even I significantly underestimated the value of the service.

Why Twitter?

Twitter finally meets the two essential criteria for business success:

1. Is there a viable revenue model?

To that I say a resounding yes! This year, Twitter began the rollout of their suite of promotion features. A form of advertising, Twitter promotions call out sponsored hashtags and help to serve up associated tweets. As Evan Williams (Twitter co-founder) pointed out at the Web 2.0 Summit in November, a considerable challenge right now is managing the excessive demand by brands to have their products and services promoted. He also pointed out that there are many more ways to monetize Twitter that are in the works.

2. Does the service have sustainable utility for its users?

Once again, Twitter has proven this to be the case over and over again. I’ll spend the remainder of this post exploring this point.

Twitter as a communications tool

There are few websites or TV commercials now that don’t adorn themselves with the Facebook and Twitter logos. These services are quickly becoming the new destinations or originating points for people interested in learning more about products and services. Twitter, with its small footprint and timeliness advantage, has the ability to uniquely reach and drive sales to a global audience. For a broader set of marketers such as politicians, governments, entertainers, charities, media outlets, and non-governmental agencies, the service provides a new and valuable channel to spread a message.

I personally use Twitter to communicate my ideas and to highlight items of interest to my followers. I also enjoy reading tweets from those I follow that are both informative and entertaining (side note: like many of you, I've completely dropped the use of RSS for pushed content as a result of Twitter). It's also a knowledge discovery tool for me (more on that below).

The usage of Twitter during the Iranian presidential protests in 2009 hints at the promise of a frictionless channel that rides above the limits of traditional communication tools.

Twitter as a disrupter of existing media

If you've had the chance to play with Flipboard for the iPad, it's easy to see that pulling in a Twitter stream illuminates the real-time zeitgeist in ways never possible before. It presents person-specific interests and provides options for content, such as video, that can be explored further if desired.

Too often we take an existing medium and simply present the same content in a different digital context. Great innovation uses digitization for reinvention. For example, we shouldn't simply bring TV to the Internet; it should be different and use the unique capabilities of digitization to make it even more compelling. In Twitter's case, the ability to serve up news in small chunks from a plethora of pundits results in the reinvention of news distribution. That's neat.

Twitter as a competitor to Facebook and Google

The September facelift of Twitter on the web, which included inline video and photographs, was suggestive of what may lie ahead. Rather than being limited to basic micro-blogging capability, the revised Twitter is a compelling place to share media and send and receive direct messages. Improved mobile accessibility and usability extend these capabilities beyond the desktop, too.

Twitter has become a destination to discover and find things. Some of that is by push (e.g. you follow a link someone shared), but increasingly it offers benefits in pull (e.g. you do a search for something). While the demise of Google search is not imminent, Twitter is a search paradigm disrupter that can’t be ignored.

Twitter is natively a social network. It easily connects people and interests. Once again, while it is not a Facebook killer yet, a few additional features would put it in direct competition with Facebook's core value propositions, potentially in a decidedly more compelling manner.

One can easily deduce why both Google and Facebook have been vying to acquire Twitter.


Of course it's not all perfect. Twitter has a lot of work to do. The company continues to have service outages when utilization spikes — a symptom of success no doubt, but an excuse that is long past its free-pass status. In the same Web 2.0 Summit interview cited earlier, Evan Williams spoke about the need — which they are working on — to have more meaningful or relevant tweets somehow rise above less valuable content. One survey found that 40 percent of tweets are "pointless babble." That's a lot of noise if you're trying to get real value from the service.

Fundamentally, Twitter is important because it takes traditional concepts such as marketing and messaging and forces us to rethink them. Its API enables powerful data analysis of trends and discovery of patterns. It has spawned an ecosystem of more than 300,000 integrated apps that extend its capabilities. It has even sparked a healthy number of copycats, both in the consumer space (e.g. Identi.ca and Plurk) and in the enterprise (e.g. Yammer and Socialcast).
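As a toy illustration of the sort of trend analysis a stream of tweets makes possible, here is a short Python sketch that counts hashtag frequencies. The sample tweets are invented for the example; in practice they would be pulled from Twitter's API.

import re
from collections import Counter

# Invented sample tweets; real ones would come from the Twitter API.
tweets = [
    "Loving the #Flipboard experience on the iPad",
    "#Flipboard plus a good Twitter stream is the new morning paper",
    "Another outage during the game tonight #failwhale",
]

# Extract hashtags and tally them; the most frequent tags hint at what
# people are talking about right now.
hashtags = Counter(tag.lower() for t in tweets for tag in re.findall(r"#\w+", t))
print(hashtags.most_common(3))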

I recognize Twitter as my 2010 technology product of the year for many of the reasons above, but specifically it is because of its potential. If the company makes a few smart decisions over the next few months and beyond, Twitter has the power to be profoundly important in many areas of our lives.


Foursquare is a smart game changer

June 21, 2010 - 11:27 pm

Deciding to share one's location using a mobile device is not a new phenomenon. Several geolocation applications, meaning those that use the device's location awareness as their core function, have served this market for some time. For the most part, the value here is that of forced serendipity. For example: if you're my friend and you decide to share your location and I am in the area, perhaps I can drop by and have a chat. In many cities this is how groups of friends are assembling. Not through a process of phone calls and lengthy coordination—how old school!—no, today for many it's about meeting up by displaying your location and hoping you are discovered, or by discovering the location of others. Clearly it's not for everyone and, consequently, to date it has had a relatively niche following.

While many vendors have entered this market and have had some limited success (measured through usage and not revenue, mind you), a new entrant has provided an unexpected boost, and it may just be its unique features that propel geolocation applications into the pseudo-mainstream.

Recently Foursquare (http://www.foursquare.com), a geolocation startup, has entered the market with the same two basic features available from the incumbents: the ability to declare one's location (called checking in) and to view the location of others. But here's the twist. Foursquare has made it a game. And it's strangely addictive. Declaring one's location can result in two types of prizes. First, for specific behaviors you get badges (it feels a little like the Boy Scouts, and I think that isn't a coincidence). These behaviors include checking in to a location late at night or declaring your location in two different cities on the same day. The badges show up on a website and are visible to others, including on their mobile devices. The more badges you collect, the more bragging rights you get (like I said, it's not for everyone). The second type of prize relates to the frequency of checking in to the same location. The person who has checked in the most at a location becomes its mayor. Generally there are no privileges attached to that designation other than the fact that you are the mayor.
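To make the mayor mechanic concrete, here is a tiny Python sketch of how "most check-ins wins" could be computed. This is my own illustration with made-up data, not Foursquare's actual implementation.

from collections import Counter

# Hypothetical check-in log: (user, venue) pairs.
checkins = [
    ("alice", "University Ave Coffeehouse"),
    ("bob", "University Ave Coffeehouse"),
    ("alice", "University Ave Coffeehouse"),
    ("alice", "City Hall"),
    ("bob", "City Hall"),
    ("bob", "City Hall"),
]

def mayor(venue):
    """The user with the most check-ins at a venue is its mayor."""
    counts = Counter(user for user, place in checkins if place == venue)
    return counts.most_common(1)[0][0] if counts else None

print(mayor("University Ave Coffeehouse"))  # -> alice
print(mayor("City Hall"))                   # -> bob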

With me so far?

I don’t want to dwell on whether there is any merit to what I’ve described here so far.  I’ll let you be the judge of that.  But here is where I think Foursquare and its gaming approach gets interesting: they can actually make money from this!

People who sell things love customer loyalty, and for good reason: keeping a loyal customer is cheaper than acquiring a new one. So someone who frequents, say, the same coffeehouse would be considered a loyal customer. Visit the store enough and, with Foursquare, that person can become the mayor. Now the person has loyalty credentials, which may be valuable to the owner of the coffeehouse. For example, why not reward that person for their loyalty and further reinforce it? Organizations do this all the time. All the individual needs to do is show the coffeehouse owner the mayor designation on their mobile device. But let's take this a little further. If frequency can lead to perks, then there is the possibility to generate more business as individuals try to earn mayor status. Or the owner may simply reward the individual for checking in at all, providing additional incentive to visit the store in the first place.

So what’s my take on this? It’s a win-win. Foursquare makes money by becoming a marketing vehicle for businesses (and uses a smart business model not based entirely on ad revenue) and the individual attains rewards for simply checking into a location.  I’m not sure that I will be a power user of Foursquare, but I like this approach.  In social computing, monetization has been elusive and these guys have put their finger on something really smart. And that’s a game changer. Let’s see what happens.


To Tweet or not to Tweet?

June 16, 2010 - 8:00 am

Since it started in late 2006, I've been a registered member of Twitter—the popular 140-character-limited microblogging service. However, I've only recently started to use it on a regular basis, and I've suddenly found it quite useful. Many of the folks I socialize with are confounded by its value; they cannot see why people post the details of their most inane activities, and they are equally baffled by those who read the postings. I do neither of these things and yet I am able to derive value from it. To understand how, I thought it would be worthwhile to briefly outline the reasons why I think it is a rather compelling service.

While 40% of Twitter content is considered pointless babble according to this study (http://bit.ly/9K4RNE), the same report put the combination of news, conversation, and pass-along value at approximately 40%. The content of my interest likely falls into news and pass-along value, which appears to be around 12%. That is a small proportion of the content, but when you consider that tweets—the messages posted on Twitter—number around 50 million per day, 12% suddenly becomes a very large number: roughly six million tweets a day. I don't read a lot of what is on Twitter and frankly, I'm just not interested in most of it. However, the small group of people and organizations I follow disproportionately represent the 12% of content I am interested in.

So why would I read content posted on Twitter? There are three main categories that currently appeal to me. First is information that is newsworthy or timely. I find Twitter to be quite effective at alerting me to items of interest in a succinct manner. Typically a tweet will include a link for more information, but by scanning the initial tweet I can quickly determine whether following the link is worthwhile. The second category is getting a novel perspective from someone whose opinion I respect. The last category is pure entertainment. I follow a number of comedians and I find their random musings pleasantly diverting. Clearly there are many other reasons beyond my own why people read Twitter, including education, notifications, surveys, and fund-raising.

The final item I want to address is why someone like me would write a tweet (remember, so far I've only been discussing why people read tweets). I think it's highly flattering when someone decides to follow me on Twitter. My assumption is that they have made this choice because they believe I may repost other tweets of value or I might have something of value to say myself. Without a scientific analysis, I would say the mix of my tweets is roughly 66/33 between the two. And while I don't have millions of followers, I am always humbled when someone retweets (forwards) an original post of mine.

It’s too early to tell whether Twitter has staying power. But it’s evident that it is more than a novelty. Its current subscriber growth rate is spectacular, while its future is far from certain.

You can follow me at http://www.twitter.com/reichental


Slides: Whispers and shouts across the world

October 1, 2009 - 8:00 am