These are my links for June 17th through July 3rd:
- The Problem and the Fix for the US Intelligence Agencies' Lessons Learned – This post is about a recent study I conducted for the Defense Intelligence Agency (DIA) looking at the Lessons Learned problems across the Intelligence Community (IC) and within DIA specifically. Although the study was conducted within an industry that has been publicly taken to task for its inability to learn from its own experience (The 9/11 Commission Report and the Commission on WMD in Iraq), the challenges these agencies face are not unlike those that many other government agencies and corporations have to confront.
In this study I examine some of the causes of those difficulties and make recommendations about how the IC might make better use of what it learns from its own experience. With DIA's permission, I have excerpted both the challenges and some of the recommendations from the larger study.
- The nonsense of 'knowledge management' – Critically examines the origins and basis of 'knowledge management', its components, and its development as a field of consultancy practice. Problems in the distinction between 'knowledge' and 'information' are explored, as well as Polanyi's concept of 'tacit knowing'. The concept is examined in the journal literature, the Web sites of consultancy companies, and in the course offerings of business schools. The conclusion is reached that 'knowledge management' is an umbrella term for a variety of organizational activities, none of which is concerned with the management of knowledge. Those activities that are not concerned with the management of information are concerned with the management of work practices, in the expectation that changes in such areas as communication practice will enable information sharing.
- Metacrap – Metadata is "data about data" — information like keywords, page-length, title, word-count, abstract, location, SKU, ISBN, and so on. Explicit, human-generated metadata has enjoyed recent trendiness, especially in the world of XML. A typical scenario goes like this: a number of suppliers get together and agree on a metadata standard — a Document Type Definition or schema — for a given subject area, say washing machines. They agree on a common vocabulary for describing washing machines: size, capacity, energy consumption, water consumption, price. They create machine-readable databases of their inventory, which are available in whole or in part to search agents and other databases, so that a consumer can enter the parameters of the washing machine he's seeking and query multiple sites simultaneously for an exhaustive list of the available washing machines that meet his criteria.
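The shared-vocabulary scenario above can be sketched in a few lines. This is a minimal illustration, not anything from the article itself: the field names (`capacity_kg`, `energy_kwh`, `price`) and supplier inventories are hypothetical stand-ins for an agreed metadata vocabulary, and the "search agent" is just a filter over every supplier's records.

```python
# Hypothetical shared vocabulary: every supplier publishes records with the
# same agreed field names, so one query can span all of their inventories.

SUPPLIER_A = [
    {"brand": "WashCo", "capacity_kg": 7, "energy_kwh": 0.9, "price": 450},
    {"brand": "WashCo", "capacity_kg": 9, "energy_kwh": 1.1, "price": 620},
]
SUPPLIER_B = [
    {"brand": "SpinMax", "capacity_kg": 8, "energy_kwh": 0.8, "price": 510},
]

def query(suppliers, min_capacity_kg, max_price):
    """Return machines from every supplier that meet the consumer's criteria."""
    return [
        machine
        for inventory in suppliers
        for machine in inventory
        if machine["capacity_kg"] >= min_capacity_kg
        and machine["price"] <= max_price
    ]

matches = query([SUPPLIER_A, SUPPLIER_B], min_capacity_kg=8, max_price=550)
print(matches)  # only the 8 kg SpinMax at 510 satisfies both criteria
```

The whole scheme only works if every supplier fills in those fields honestly and consistently — which is exactly the assumption the Metacrap essay goes on to attack.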
- Column 2: Transition strategies for Enterprise 2.0 adoption #e2conf – Lee Bryant of Headshift looked at the adoption challenges for Enterprise 2.0 technologies in companies that have grown up around a centralized model of IT, particularly for the second-wave adopters required to move Enterprise 2.0 into the mainstream within an organization. He points out that we can't afford the high-friction, high-cost model of deploying technology and processes, but need to rebalance the role of people within the enterprise.
External tools are subject to evolutionary forces and either adapt or die quickly, whereas we are forced to put up with Paleolithic-era tools inside the enterprise because it’s a captive market. 21st century enterprises, however, aren’t putting up with that: they’re going outside and getting the best possible tools for their uses on demand, rather than waiting for IT to provide a second-rate solution, months or years later.
- Seth Godin's Web 2.0 Traffic Watch List on Statsaholic.com – There are literally thousands of "web 2.0" companies, and until now, there's been no easy way to compare which ones are getting traffic. The list of 952 sites below was inspired by the list started by Bob Stumpel and then added to by many others.