Category Archives: knowledge management

Data, data, data…

There’s certainly a lot being written at the moment about the significance of data in our lives. With the advent of advanced networks, virtualisation and cloud computing, massive (and cheap) storage etc., together with the ever-increasing demands for storing large, multimedia files, we’re beginning to see a completely different perspective on data, stemming from concerns such as:

  • what data do we need to store and manage?
  • how long do we need to keep it for?
  • where will it be stored?
  • what format(s) will it be stored in?
  • who can access it?
  • what about backup, support, failover etc.?
  • what can we do with it (combinations, mash-ups, visualisation etc.)?

The recent earthquakes in Christchurch have brought many of these issues sharply into focus, with several schools and businesses losing access to their data when the servers holding it were lost or damaged along with the buildings that housed them. This infographic showing physical storage vs. digital storage illustrates a number of the ideas and issues that we need to be thinking about in this regard.

Mashable’s 5 predictions for online data in 2011 paint something of the bigger picture in this regard, illustrating why businesses – including schools – should be thinking about their data storage and management at an enterprise level, and not simply as an ‘in-house’ extra. As they say in their last prediction, “You’ll be sick of hearing about data (if you’re not already).”

Of course, concerns about the storage and protection of data are just one part of the picture. There’s also an enormous amount that we can do with data now, thanks to the sophisticated (and fast) processing engines available. A favourite example of mine at the moment is a video from BBC Four’s “The Joy of Stats”, Hans Rosling’s 200 Countries over 200 Years in 4 Minutes, which illustrates how data can be used very effectively to help visualise broader concepts – showing the dramatic changes that have occurred, and offering a glimpse of what the future might be like based on the extrapolation of these trends.

On an international scale there’s a move towards making all data ‘open’ and available. The controversy around Wikileaks earlier this year illustrated the great debate that is to be had about this as a philosophy, but there’s plenty of evidence to suggest that there’s an up-side to making data available for wider interrogation and use. Several countries are now making data gathered by their governments (e.g. census data, building consents data etc.) available for citizens to access, in the hope that as it is used and manipulated, new trends and patterns of thinking about it may emerge. Examples can be found at the US Center for Public Education – Data First, and in open data initiatives from England, the USA and New Zealand. Schools should be considering ways of using these sites to enable students to work from authentic data sources.

When disaster strikes

A number of years ago I had the misfortune to be caught in a heavy rain shower on my way to work. Not only did the water penetrate the raincoat I was wearing, leaving me totally saturated, but it also ‘drowned’ my laptop: when I tried to start it up, the hard drive proved completely unusable and nothing could be retrieved from it. Fortunately I worked in an organisation that allowed me to send daily backups of my laptop across the network to be stored on the server. Within a few hours I was again working on a borrowed laptop, with all my files installed, minus just a few things I’d been working on the night before.

That was my first ‘close shave’, and it taught me the absolute importance of backups. Failure to make them would have been a disaster for me!

I’m imagining that many schools and teachers in Christchurch are thinking about this after the recent earthquake. Many have either had their laptops or servers destroyed, or have lost access to them as they lie inside condemned buildings. For them the issue of ‘disaster recovery’ takes on new meaning: not simply a case of whether things have been backed up, but also of where those back-ups are located.

The principal from one school I spoke to is distraught because, while his school had invested wisely in a complete back-up server and ensured that comprehensive back-ups were made on a regular basis, the back-up server was located alongside the active server in the school, and together they lie in a condemned building in the city. Their data is undoubtedly safe, but inaccessible.

A teacher from a second school was telling me how ‘lucky’ they were: as the earthquake was happening, their technician had the presence of mind to grab the back-up tapes from the office as he fled the building, and now the staff and students are able to continue operating on borrowed computers in borrowed premises, accessing their files installed on a borrowed server. Certainly a case of good luck rather than good planning – they are the fortunate ones. Their tapes could just as easily have been left inaccessible inside a condemned building, leaving them in the same situation as the first school.

One of the essential elements of a good disaster recovery plan is to ensure that you have off-site back-up and storage. This doesn’t simply mean taking the back-up tapes home at the end of each day. Effective off-site back-up involves regularly ‘pushing’ data to an off-site server – this should occur at least once daily, typically overnight, but with digital data being mission-critical for schools, more frequent back-up or “continuous data protection” should be seriously considered.
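For schools running their own servers, the mechanics of an off-site push can be quite simple. Here’s a minimal sketch (Python, wrapping rsync; the hostname and paths are hypothetical, purely for illustration) of the kind of nightly job a technician might schedule:

```python
import datetime
import subprocess

# Hypothetical values -- substitute your own server details.
LOCAL_DATA = "/srv/school-data/"
OFFSITE = "backup@offsite.example.org:/backups/school-data/"

def push_offsite():
    """Mirror the local data directory to the off-site host.

    rsync only transfers files that have changed, so a nightly
    push stays fast even when the full data set is large.
    """
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    result = subprocess.run(
        ["rsync", "-az", "--delete", LOCAL_DATA, OFFSITE],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Fail loudly -- a silently failed backup is worse than none.
        raise RuntimeError(f"{stamp}: off-site push failed: {result.stderr}")
    print(f"{stamp}: off-site push completed")

if __name__ == "__main__":
    push_offsite()
```

Run overnight from a scheduler such as cron, this gives the ‘at least once daily’ push; true continuous data protection would replace the scheduled job with replication that streams changes as they happen.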

This is one of the significant benefits of being connected to Ultrafast Broadband, and as schools look forward to how they can leverage their investment in UFB, the lessons learned from Christchurch should push a good disaster recovery plan somewhere near the top of the list.

Visualising data

I spent this morning with the staff at Ellesmere College, just south of Christchurch, at one of their teacher-only days. I was presenting some ideas about how ICTs can be used to assist in the development of thinking in our students, and referenced the ways in which we can now use technology to help visualise data. In that context I showed them this wonderful site called the History of the Australian web, which allows you to ‘see’ the development of the web in Australia from 2001. The x and y axes can be altered by clicking on the various options along each edge, and the bar across the top references the timeline. Hovering over each ‘bubble’ shows which website it represents, along with other data about that website. Double-click on any of the ‘bubbles’ and you can then trace their progress over the 8-year period.

What a long way we’ve come since we converted data in tables into graphs in spreadsheets. I’m kinda looking forward to seeing more of this kind of thing, particularly where it is created from real-time data as it is generated.
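For anyone curious about what sits behind visualisations like these, here’s a minimal sketch (Python with matplotlib; the data values are invented purely for illustration) of a Gapminder-style bubble chart, where the x position, y position and bubble size each encode a different variable:

```python
import matplotlib.pyplot as plt

# Invented illustrative data: one bubble per website.
sites = ["site A", "site B", "site C"]
inbound_links = [120, 450, 80]       # x axis
pages_indexed = [3000, 15000, 900]   # y axis
visitors = [5000, 40000, 1200]       # bubble size

fig, ax = plt.subplots()
# Scale the visitor counts down so they work as point areas.
ax.scatter(inbound_links, pages_indexed,
           s=[v / 100 for v in visitors], alpha=0.5)
for name, x, y in zip(sites, inbound_links, pages_indexed):
    ax.annotate(name, (x, y))
ax.set_xlabel("Inbound links")
ax.set_ylabel("Pages indexed")
ax.set_title("Three variables in a single view")
plt.show()
```

Animating the timeline, as the Australian site does, is essentially a matter of redrawing a chart like this for each year’s data.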

Measuring the right things

I’ve just read a fascinating publication from Microsoft titled “Interoperability: Improving Education”, which came about when 10 or so educators and ICT practitioners were brought together by Microsoft for a meeting running alongside the annual NAACE conference held in Blackpool, England, earlier this year. The brief was to talk about the way that schools use pupil data, and the wisdom that ensued is contained in a new Microsoft discussion document for school leaders and local authorities, “Interoperability: Improving schools” (download the PDF here).

The contents of this paper provide timely insights for NZ educators because it nicely ties together two pieces of work that are currently the focus of the Ministry of Education. First are the discussions about standardised testing and the measurement of student progress and achievement; second are the issues of interoperability as they relate to work being done around data interoperability between LMSs and SMSs in schools, and the whole area of e-portfolios.

From its title, and the fact that it’s published by a technology company, you could be forgiven for thinking that the document is about a technology solution that will end all our woes. This is not the case. Instead, the document contains a summary of thoughts in response to questions such as:

  • are we collecting the right data?
  • what data should we be collecting?
  • who needs, or wishes to see the data?
  • can we easily move data to where it’s wanted or needed?

The context for the discussion is identified in this excerpt from the introduction:

The last five years has been dominated by discussions about common file formats, and competing systems which support data interchange standards. The International Standards community, through its work on file standards, has helped us reach a situation where students and teachers can easily share assignments, examination submissions and documents.

The same cannot be said for simple data interchange – for example, the simple requirement to automate the process of keeping a list of users up to date within a learning platform without a manual intervention. And those solutions which do exist for this appear to need customisation for each data relationship – between different learning platforms for example.

I believe that we are collecting the data within our educational systems that we need to deliver improvement, but only those data items that are being seen as “part of the system”. We collect a core of formal learning data in our schools’ Management Information Systems, and through other systems we collect further datapoints – often disconnected from the core learning data – on health, achievement and engagement.

In the 21st Century, we are seeing a huge growth in learning and engagement outside of the formal education system. As we continue to build extensive connected learning communities, we need to find ways to see the holistic story of a truly connected learner, including their learning in school, in the community and individually. We need to move from a top-down data culture (ie we measure what the managers above us want measured) to an individually driven data culture, where the individual has more input to the data that tells their individual story, and where their past learning journey is used to support their future learning journey.

To achieve this we need to think outside of the strict confines of top-down, organisational data collection. We need to ask questions from a different perspective: “How can students self-assess their skills and use that to improve their learning?” and “If we asked a student to tell us how they are doing at school, what data would they share with us?”
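To make the excerpt’s ‘simple data interchange’ example concrete: here’s a minimal sketch (Python; the record fields and the JSON format are hypothetical, not any actual standard) of what automated user-list synchronisation between a student management system and a learning platform amounts to, once both sides agree on a common format:

```python
import json

# Hypothetical export from a student management system (SMS),
# serialised in an agreed common format such as JSON.
sms_export = json.loads("""
[
  {"id": "s1001", "name": "Aroha Smith", "year": 9},
  {"id": "s1002", "name": "Ben Jones", "year": 10}
]
""")

# Hypothetical user list currently held by the learning platform (LMS).
lms_users = {"s1001": {"id": "s1001", "name": "Aroha Smith", "year": 9}}

def sync_users(export, users):
    """Create or update LMS users from the SMS export; flag leavers."""
    exported_ids = {record["id"] for record in export}
    for record in export:
        users[record["id"]] = record       # create or update
    return set(users) - exported_ids       # in the LMS but not the SMS

leavers = sync_users(sms_export, lms_users)
print(f"{len(lms_users)} users after sync; {len(leavers)} to deactivate")
```

The hard part in practice is not the code, of course, but getting systems to agree on formats and identifiers in the first place, which is exactly the interoperability problem the paper describes.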

I am currently enjoying being a part of separate discussions (online and offline) around each of the issues identified above – but perhaps there’s good cause to reflect here, and to think about how timely it might be to work like this think tank did, engaging in some robust discussions that actually link both parts of the equation?

Perhaps it would allow us to reach conclusions similar to those the English reached as a basis for moving forward:

The answer, as it emerged in discussion, is that there’s arguably too much emphasis on one kind of data. The current pattern of top-down accountability, it’s suggested, creates an emphasis on classroom attainment at the expense of skills and competencies. Or, as Sir Mark Grundy puts it, “We’re measuring the wrong things.”

What’s next for newspapers?

I’m now back in NZ, getting used to the time zone differences 🙂

Over recent months I’ve read an increasing number of stories, articles and comments on the future of newspapers that I’ve been storing away to make comment on, as I see the whole debate as being indicative of the paradigm shift in the “knowledge economy” we’re all a part of. As a blogger this thinking has been percolating in my mind for some years now as I think about how I access the news, how I filter it, engage with it and report it.

The interactive map above is part of a recent initiative of the Independent newspaper in the UK, titled “What’s next for newspapers?” Prompted by the impact of the global recession on the newspaper industry, the Independent is using the opportunity to promote a richer debate about the impact of digital technologies on the industry, and the implications of these changes for journalism and for society. The team at the Independent say that…

The aim with interactive collaborative maps of this kind is to weave together all of the salient issues, positions and arguments dispersed through the community into a single rich, transparent structure – in which each idea and argument is expressed just once – so that it’s possible to explore all perspectives quickly and gain a good sense of the scope and perceived merits of the different arguments.

I see a great topic here for high school media studies students, or social studies classes for that matter. And it’s great to see the Independent actively using the debategraph tool as a means of engaging people in this debate – I’m a fan of this tool as I love the way it dynamically represents the changing perspectives in the debate, and enables large scale participation.

The Independent article refers to the thoughts of Clay Shirky, whose post on Newspapers and thinking the unthinkable got me thinking about this a lot more just a few weeks ago. Shirky traverses the issues of ownership, control, quality, economics and the impact of digital technologies in his article – focusing in on his argument that…

Society doesn’t need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead.

Not everyone agrees that newspapers are under threat, however. John Hartigan, CEO of News Limited in Australia, claims that the future of newspapers is bright. He is critical of the traditional ‘knowing a little about a lot’ approach of newspapers to reporting the news, and sees the future involving teams of highly educated people with specialist knowledge providing more in-depth news and analysis. He is not a fan at all of the notion of “citizen journalists”, and dismisses claims often made by bloggers that theirs is a fresh, more democratic medium, saying “Amateur journalism trivialises and corrupts serious debate”.

If you’re looking for some perspectives and themes to fire up your students’ thinking, then I’d recommend Ryan Scholin’s post on 10 obvious things about the future of newspapers (it would also pay to read his original post from 2007 to get an idea of what has changed.)

I’d love to hear stories of classes that participate in this debate, and the usefulness of the debategraph map as a focus for this.

Making the world’s knowledge computable

A few days ago a friend of mine sent a link to Wolfram|Alpha, due to be released about now. The brainchild of distinguished scientist, inventor, author and business leader Stephen Wolfram, Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. Wolfram is also known as the creator of Mathematica, a powerful computational and visualisation tool upon which Wolfram|Alpha has been built.

I’ve spent a bit of time playing around with the search functionality and am impressed. While the search results at this stage tend to be very US-centric in terms of the range of sources of information, the potential of the tool can be seen immediately.

This is certainly one of those “watch this space” pieces of technology that will continue to grow and pave a way forward in the whole area of online search in years to come.

Watching the screencast is a good way to become familiar with what it can do.

Looking forward to 2009

Back to work today after a welcome three weeks away camping – no phone, no computer and no broadband – it’s likely to be quite a culture shock!

Browsing through my email in-box over the weekend I came across the thoughts of a number of others who, like me, are pondering what the new year might bring…

Each year since 1985, the editors of The Futurist have selected the most thought-provoking ideas and forecasts appearing in the magazine to go into their annual Outlook report. Here is a summary of the editors’ top 10 forecasts, covering topics as diverse as the environment, energy, religion, and the nature of knowledge.

Network World’s 9 Web Sites IT pros should master in 2009 includes reference to Yammer, an application that I’ve recently started using with colleagues at work – it’s essentially Twitter for the office.

The authors of the ReadWriteWeb blog offer their 2009 Web Predictions in a series of lists – worth reading the comments to this post which has links to a number of other lists created by individuals, including Gerhard who shares his predictions for digital trends in 2009 – with a focus on the South African context.

e-Learn magazine has also published its predictions for 2009, summarising the contributions of a number of education luminaries, including Chris Dede, Jay Cross and Stephen Downes – always useful to see what these people have to say, and great to have it reported in such a concise manner.

Meanwhile, TechCrunch have announced their second annual Crunchies awards, with the categories alone well worth a browse to understand just how broad this whole field is. Always amazes me to think how many of these categories simply didn’t exist just a few years ago – a sign of how rapidly things are developing in this area.

For those interested in the tertiary space, Lev Gonick from Case Western Reserve University in Ohio has posted his top 10 trends for Higher Education in 2009. I like what he has to say as it is couched within the context of the current world economic climate and the realities faced by tertiary organisations – whilst obviously written for the US context, the issues are the same for NZ.

From HandHeldLearning comes a post titled Is the 21st Century Here Yet?, which contains the thoughts of an eclectic group of ICT specialists, broadcasters, educators and journalists whose predictions look at the wider context of schools and education.

For a slightly different take, check out the Agitationist’s blog, which offers a month-by-month series of predictions for the web world in 2009 – a little tongue-in-cheek, but well worth a read anyway.

Wow – such a lot to digest, but interesting to see the things that keep cropping up (Facebook, Web2.0, Cloud Computing etc.). Interesting to note the number of things already covered in CORE’s Ten Trends for 2008 – one of my tasks for this week is to finalise CORE’s Ten Trends for 2009, with a NZ flavour, so this gives me plenty of food for thought.

Google Flu Trends


Interesting article from ReadWriteWeb about the release of Google Flu Trends that highlights the usefulness of aggregating information from search queries – in this case, relating to influenza. The idea is simple: by tracking search queries relating to influenza (e.g. queries about symptoms, cures, treatment etc.) and cross-referencing that data against information from the Center for Disease Control, the team at Google.org (Google’s non-profit arm) discovered that they had the ability to predict flu outbreaks by monitoring search patterns. And the advantage of doing this…? Traditional flu surveillance systems take 1–2 weeks to collect and release surveillance data, but Google search queries can be counted automatically and very quickly, making flu estimates available each day and thus providing an early-warning system for outbreaks of influenza. The ReadWriteWeb article has a cool animated graph that illustrates this point.
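The underlying idea is statistically quite simple: if weekly counts of flu-related queries track officially reported cases, then a line fitted to historical data can turn today’s query counts into a same-day estimate of flu activity. Here’s a minimal sketch (Python; the numbers are invented for illustration, and this is not Google’s actual data or method):

```python
import statistics

# Invented weekly figures: flu-related query counts alongside
# officially reported cases for the same historical weeks.
query_counts = [120, 150, 300, 500, 800, 650]
reported_cases = [40, 55, 110, 190, 310, 240]

# Fit a least-squares line: cases ~ slope * queries + intercept.
slope, intercept = statistics.linear_regression(query_counts, reported_cases)

# Query counts are available almost immediately, so the fitted line
# gives an estimate weeks before official surveillance reports arrive.
todays_queries = 700
estimate = slope * todays_queries + intercept
print(f"Estimated current cases: {estimate:.0f}")
```

The real system has to deal with seasonality, regional variation and noisy queries, but the early-warning advantage comes from exactly this gap between instant query counts and slow official reporting.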

Teaching Boolean Searching

I came across this wonderfully easy-to-use search tool today after reading Jane’s Blog. Boolify provides a simple yet effective way of introducing students to the complexities of Boolean searching.

Librarians, teachers and parents have told us how hard it is for students to understand web searching. Boolify makes it easier for students to understand their web search by illustrating the logic of their search, and by showing them how each change to their search instantly changes their results.

It’s simple, immediate, and easy and flexible to use with your class, no matter the subject matter.

Search results are presented through Google’s “Safe Search STRICT” technology, so we’re confident that the results your students receive are safe.

While checking out the Boolify site I also came across this video clip that explains a little of what Boolean search is all about. Useful stuff.
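The logic Boolify teaches is easy to show in a few lines of code. Here’s a minimal sketch (Python, with a toy document index invented for illustration) of how AND, OR and NOT combine to narrow or widen a set of results:

```python
# Toy "index": which documents (by number) contain which terms.
index = {
    "penguins": {1, 2, 3},
    "antarctica": {2, 3, 4},
    "zoo": {1, 5},
}

# AND narrows: only documents matching both terms.
both = index["penguins"] & index["antarctica"]   # {2, 3}

# OR widens: documents matching either term.
either = index["penguins"] | index["zoo"]        # {1, 2, 3, 5}

# NOT excludes: penguins, but not in a zoo.
wild = index["penguins"] - index["zoo"]          # {2, 3}

print(both, either, wild)
```

Each change to the query instantly changes the result set, which is precisely the behaviour Boolify makes visible for students.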

How private is your data?

The uptake of web-based tools and applications in the Web2.0 world prompts a question in my mind from time to time – “where is all the information stored, and who has access to it?”

I thought about this again when I read Sue Waters’ latest post, in which she has published the results of a Twitter poll she conducted by asking her Twitter followers to name their favourite 3 Web2.0 applications (apart from Twitter, del.icio.us and Firefox).

I’m very interested to note the extent to which Google applications emerged in the favourites list from her poll. I’m a big fan and user of many of these myself, but have recently become aware of Google’s reputation for being “hostile” towards users’ privacy.

This was brought home to me further by a recent article in the Globe and Mail titled Patriot Act Haunts Google, which highlights that Google’s online services (Docs, Sites etc.) are subject to the “USA PATRIOT” Act (an acronym that stands for “Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001”), which could make the use of these services, in the eyes of some institutions, (a) a threat to academic freedom, or (b) in breach of Canada’s privacy laws – depending on what data is put there.

Certainly food for thought; I suspect we’ll see more debate on this emerging over the next few months.