Learning analytics and business intelligence were major themes at this year’s EDUCAUSE conference. I attended a couple of sessions to see how other institutions are using these technologies. Our learning technologists are all engaged with these ideas, but I personally knew little about them, so this was a good opportunity to learn.
John Doove gave a very interesting overview of several small projects funded by SURF, the Dutch equivalent of JISC. These were deliberately experimental, aimed at bringing people together to see what they could do. The results were varied and neat, and they will form the basis for further work. This gave me a good overview of the sort of issues that learning analytics can investigate:
- Project 1 gained insight into the use of learning objects (e.g. short instructional videos), showing that use of certain materials correlated with successful student outcomes.
- Project 2 compared students’ course evaluations with their performance data.
- Project 3 researched how students used the LMS. They found that short sessions corresponded to successful students, which goes against evidence from other experiments. On investigating, they found that problems in the UI were holding up some students and interfering with their learning (a rough sketch of this kind of usage-versus-outcome analysis follows the list).
- Project 4 investigated when to roll out a component of the LMS that had already been bought but was not being used.
- Project 5 used text mining of student notes in digital textbooks. For example, repeated occurrences of the word “understand” could indicate that students were not understanding a certain area.
- Project 6 prototyped a dashboard for students, showing online behaviour in Blackboard.
- Project 7 investigated learning paths through the curriculum and how these affect student performance. This is now developing into a recommender service for students.
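Several of these projects, 1 and 3 in particular, come down to correlating some measure of LMS use with student outcomes. As a minimal sketch of the idea (the data and variable names here are my own invention, not anything from the SURF projects), the core calculation is just a correlation coefficient:

```python
# Minimal sketch: correlating an LMS usage metric with outcomes.
# All data and names here are hypothetical illustrations, not from the talk.
from statistics import correlation, mean

# Per-student averages: LMS session length (minutes) and final module mark (%).
avg_session_minutes = [12, 35, 8, 50, 22, 15, 40, 9]
final_marks = [68, 55, 72, 48, 61, 70, 52, 74]

# Pearson correlation coefficient (statistics.correlation needs Python 3.10+).
r = correlation(avg_session_minutes, final_marks)

print(f"Mean session length: {mean(avg_session_minutes):.1f} min")
print(f"Correlation between session length and marks: r = {r:.2f}")
# A negative r here would echo Project 3's surprising finding that shorter
# sessions went with better outcomes -- a prompt to look for confounders
# such as UI problems, rather than a causal conclusion.
```

The interesting part, as Project 3 found, is interpreting a surprising coefficient rather than computing it.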
The other learning analytics presentation was on an altogether different scale. Three institutions collaborated to build an aggregated data set, containing over 600,000 student-level records, that they could use to analyse and compare outcomes. The talk didn’t really go into details of what they learnt from the data, focussing more on the process of building the data set in the first place.
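The talk didn’t describe the mechanics, but the bulk of that kind of work is mapping each institution’s records onto a shared schema before pooling them. As a purely hypothetical sketch (the file names, column mappings and schema below are all invented for illustration):

```python
# Hypothetical sketch of pooling student-level records from several
# institutions into one data set with a shared schema.
import csv

# Each institution exports its own CSV with its own column names;
# these mappings (and file names) are invented for illustration.
SOURCES = {
    "institution_a.csv": {"student_id": "id", "entry_year": "year", "outcome": "result"},
    "institution_b.csv": {"student_id": "sid", "entry_year": "cohort", "outcome": "final_outcome"},
}

def load_records():
    """Yield records from every source, renamed to the shared schema."""
    for path, mapping in SOURCES.items():
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield {shared: row[local] for shared, local in mapping.items()} | {"source": path}

def write_aggregate(out_path="aggregate.csv"):
    """Write the pooled, renamed records to a single CSV."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["student_id", "entry_year", "outcome", "source"])
        writer.writeheader()
        writer.writerows(load_records())
```

Most of the real effort presumably went into agreeing that shared schema and cleaning the data, rather than the pooling itself.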
On the business intelligence side of things, Henry Childers of the University of Arizona gave a thorough and illuminating presentation about their major BI initiative. He showed how much effort had gone into their system and how they had organised the work. Their approach was to start with operational data on an area-by-area basis (e.g. student data, finance, research, estates), then gradually introduce management data, and finally aim for strategic data. They have a common data governance group, which sounds similar to our own Applications Architecture Governance Group.
I was particularly interested in one of the lessons Henry listed: a specification-based approach did not work well for designing reports, and they needed a much more iterative and responsive approach. I think this matches our experience too, and we should probably adapt the way we develop reports.
There were many other presentations on these issues, particularly on learning analytics. Reports I heard from other delegates suggested that the quality of presentations was rather mixed, so I’m glad that I found at least one good talk on each topic.