Making play-based maths easier for teachers to assess – testing a blend of low- and high-tech approaches

Michael Rumbelow and Professor Alf Coles lead one of our seedcorn-funded projects that aims to help boost children’s confidence in maths.

Using an AI-driven app, they explore how children learn through traditional play with wooden blocks. Here they discuss how digitising this learning aid could benefit teachers' classroom assessment, and the challenges of developing novel technologies as education specialists.

In 1854, the first English-speaking Kindergarten opened in London, based on the play-based pedagogy of Friedrich Froebel (1782-1852), who designed his Kindergarten curriculum around play activities with wooden blocks. Later plastic versions of Froebel’s blocks were developed, which evolved into Lego – now the world’s largest toymaker – as well as into interlocking plastic cubes for primary mathematics classrooms – which the characters in the popular CBeebies cartoon series Numberblocks are made of. And more recently, free play with digital cubes became the basis of Minecraft, the most popular video game of all time.

Figure 1. Sketches of using wooden cubes to model halving and quartering from an 1855 Kindergarten handbook.

Clearly, block play is a popular activity among children. In schools, too, there has been a resurgence in the use of physical blocks in primary mathematics classrooms, following the government's policy since 2016 of promoting so-called 'Asian mastery' approaches to teaching maths, as used in Singapore, China, South Korea and Japan. These approaches make extensive use of physical blocks as concrete models of abstract mathematical concepts such as counting, addition and multiplication. We were interested in researching children's interactions with physical blocks from a mathematics education perspective, and a key challenge was how to capture data on those interactions for analysis.

Previous studies of block play have gathered data variously by sketching, photographing or videoing children's block constructions, or by embedding radio transmitters in blocks to transmit their positions and orientations. Recent developments in computer vision technology offer novel ways of capturing data on block play. For example, photogrammetry apps such as 3D Scanner can now create 3D digital models from images or video of objects taken on mobile phones, and AI-based object recognition apps are increasingly able to detect objects they have been trained to 'see'.

We felt there might be an opportunity to detect and digitise the positions of wooden or plastic cubes on a tabletop directly through a webcam, so that the coordinates of the corners could be used to create virtual animated models of stages of block constructions which could then be explored in various ways, such as in immersive virtual 3D environments, by both researchers and students. This abstracted coordinate data would also enable patterns of real-world block constructions to then be analysed statistically, for example using AI pattern recognition algorithms.
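As a rough illustration of the abstraction we had in mind, the sketch below converts detected corner pixel coordinates into abstract grid coordinates. It assumes a fixed overhead camera and a hypothetical apparent block size in pixels – values chosen for illustration, not taken from the project.

```python
# Illustrative sketch only: converts detected corner pixel coordinates
# into abstract block-grid coordinates suitable for statistical analysis.
# BLOCK_SIZE_PX is a hypothetical value, assuming a fixed overhead camera.

BLOCK_SIZE_PX = 50  # assumed apparent size of one cube at table distance

def to_grid(corner_px):
    """Map an (x, y) pixel corner to integer grid coordinates."""
    x, y = corner_px
    return (round(x / BLOCK_SIZE_PX), round(y / BLOCK_SIZE_PX))

# Detected corners are slightly noisy; rounding snaps them to the grid.
corners = [(0, 0), (52, 0), (48, 51), (101, 99)]
print([to_grid(c) for c in corners])
```

Once block positions are reduced to coordinates like these, the same construction can be re-rendered in a virtual environment or compared across children and sessions.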

Figure 2: A sketch of 8 cubes being used to model a garden seat in an 1855 Kindergarten guide (left); a photo of a reconstruction of the sketched model with wooden cubes (centre); and a screenshot of a prototype 3D model generated from the reconstruction with photogrammetry app 3D Scanner (right). (The 3D model is viewable here: http://3d-viewer.xplorazzi.com/model-viewer/index.html?modelId=629e943a3aaf2b171525a9b5 )

With funding from the BDFI we were able to form a small project team of two researchers in the School of Education, and a software developer and the head of a local primary school, in order to develop an app to trial with children in the school.

Technical Challenges

The problem of capturing the positions and orientations of blocks digitally almost immediately became more challenging than we had anticipated. Initially we had hypothesised that detecting straight edges would be a relatively simple computer vision task. In practice, however, traditional edge-detection algorithms proved unreliable at detecting edges and extrapolating cube positions, with multiple confounding issues: lighting, shadows, orientation, variations in perspective and vertical position, variations in wood texture and colour, and hidden edges under stacked blocks. One approach we attempted was to paint each block a different colour to aid recognition, but this too was unsuccessful.
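To give a flavour of the traditional approach we first tried, here is a deliberately minimal finite-difference edge detector (a simplified stand-in for production algorithms such as Canny; the synthetic image, noise level and threshold are illustrative only). It shows how cleanly edges appear on an idealised block image, and how quickly noise such as shadows or wood grain produces spurious edges.

```python
import numpy as np

# A minimal finite-difference edge detector, illustrating the traditional
# approach (real systems would use e.g. Canny in OpenCV). The threshold
# and synthetic images below are illustrative values only.

def edge_map(img, threshold=0.5):
    """Return a boolean map of pixels with strong intensity gradients."""
    gx = np.abs(np.diff(img, axis=1))  # horizontal gradient
    gy = np.abs(np.diff(img, axis=0))  # vertical gradient
    edges = np.zeros_like(img, dtype=bool)
    edges[:, :-1] |= gx > threshold
    edges[:-1, :] |= gy > threshold
    return edges

# A synthetic 'block' on a plain background: edges are found cleanly...
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(edge_map(img).sum())

# ...but adding noise (standing in for shadows and wood grain) quickly
# produces many spurious edge pixels.
noisy = img + np.random.default_rng(0).normal(0, 0.4, img.shape)
print(edge_map(noisy).sum())
```

On real tabletop photos the confounds are far worse than this toy noise model, which is why we eventually abandoned edge detection altogether.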

Figure 3. The move from plain wooden blocks to painted blocks to Cuisenaire rods to aid recognition

Finding ourselves stuck on block recognition, we decided on two radical changes of direction: (a) to move from traditional edge detection to AI-based computer vision algorithms such as Mask R-CNN, and (b) to drastically simplify the recognition problem by focusing on Cuisenaire rods – standard classroom manipulatives ranging from 1 cm to 10 cm in length, each a distinct colour, and typically arranged flat on the table, avoiding the issue of stacked blocks (Figure 3).
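Part of what makes Cuisenaire rods such a convenient target is that the standard colour coding puts colour and length in one-to-one correspondence, so a detected rod can be labelled from its colour class alone. The sketch below uses that standard mapping; the idea of a single 'colour class' per detection is a simplified stand-in for the output of a segmentation model, not the real app's data format.

```python
# Because each Cuisenaire rod length has a distinct colour, labelling a
# detection reduces to a lookup. This is the standard Cuisenaire colour
# coding; the 'colour class' input is a simplified stand-in for the
# output of a model such as Mask R-CNN.

ROD_LENGTH_CM = {
    "white": 1, "red": 2, "light green": 3, "purple": 4, "yellow": 5,
    "dark green": 6, "black": 7, "brown": 8, "blue": 9, "orange": 10,
}

def label_detection(colour_class):
    """Turn a predicted colour class into a rod description."""
    length = ROD_LENGTH_CM[colour_class]
    return f"{colour_class} rod, {length} cm"

print(label_detection("yellow"))
```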

Our developer found that a gaming laptop equipped with a GPU was powerful enough to run Mask R-CNN, and that with sufficient training – on approximately 150 images – it could detect the positions of Cuisenaire rods in a live webcam feed within 2-3 seconds of processing time, which we felt was acceptable from a usability point of view.

With a feasible solution for rod detection in place, the developer could relatively easily add code to generate images and sounds associated with each rod, such as displaying a graphical image of it on screen and speaking its colour or length. We trialled the app with Year 1 children in a local primary school, and produced a paper about the trial for the British Society for Research into Learning Mathematics.
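The feedback step can be sketched as a simple mapping from detections to output events – one drawing command for the screen and one phrase for text-to-speech per rod. The detection dictionaries and event tuples here are hypothetical illustrations, not the app's actual data structures.

```python
# Illustrative sketch of the feedback step: each detected rod is turned
# into a drawing command and a phrase for text-to-speech. The detection
# dicts and event tuples are hypothetical, not the real app's format.

def feedback_events(detections, speak="colour"):
    """Build a list of (event, ...) tuples for the display and speech layers."""
    events = []
    for det in detections:
        events.append(("draw_rect", det["bbox"], det["colour"]))
        phrase = det["colour"] if speak == "colour" else str(det["length_cm"])
        events.append(("speak", phrase))
    return events

rods = [{"bbox": (10, 20, 110, 40), "colour": "red", "length_cm": 2}]
for event in feedback_events(rods):
    print(event)
```

Keeping detection and feedback decoupled in this way made it easy to swap in different responses (colour names, lengths, sounds) without touching the recognition code.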

Figure 4. The experimental set-up as used in the initial trial in a primary school

Lessons learnt

As educational researchers with little experience of developing apps such as this, we have learned many lessons. One is the value of iterative, so-called ‘Agile’ approaches which enable rapid experimentation and pivoting of direction in order to solve problems that inevitably arise in developing novel technologies.

Another is the value of the ecosystem of open-source libraries, shared expertise and documentation which grows over time around any novel technology – in particular around complex open-source AI algorithms and tools such as Google's TensorFlow and Facebook's Detectron. Occasionally, a novel technology we tried looked attractive in terms of affordability – for example the OAK-D camera with built-in AI processing – but was so new at the time that the supporting knowledge ecosystem had not yet developed, which effectively made it infeasible to develop for in the short term.

And a third lesson learned is the critical importance of training data for AI computer vision algorithms. For example, to recognise blocks placed on a school desk in daylight, the algorithm should be trained on images from as similar an environment as possible, but randomised sufficiently to avoid 'overfitting'. This process of training AI algorithms also provided us with rich insights, from an educational conceptual perspective, into current neural network models and neuroscientific theories of how human brains learn – as well as some of the power and limitations of these theories.
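The kind of randomisation we mean can be sketched as data augmentation: each training image is perturbed with random brightness shifts and flips so the model does not overfit to one desk, one lighting condition or one camera angle. The parameters below are illustrative, not the values used in the project.

```python
import numpy as np

# Sketch of training-data augmentation: random brightness shifts and
# horizontal flips applied to each image, so a model trained on them does
# not overfit to one desk or lighting set-up. Parameters are illustrative.

def augment(img, rng):
    """Return a randomly perturbed copy of a greyscale image in [0, 1]."""
    out = img + rng.uniform(-0.2, 0.2)   # simulate lighting variation
    if rng.random() < 0.5:
        out = np.fliplr(out)             # simulate mirrored viewpoints
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
img = np.full((4, 4), 0.5)
variants = [augment(img, rng) for _ in range(3)]
print(len(variants))
```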

Future challenges

With a prototype delivered which can successfully recognise Cuisenaire rods, running on a GPU-equipped laptop and webcam, we are now looking towards potential future phases of development. We'd like to revisit recognising plain cubes, and to make the app accessible on other devices such as low-spec computers or mobile phones, allowing us to gather data on block play more widely from schools, as well as enabling children and their families to use the app at home.

We would also like to develop an AI app to analyse the block play data and recognise patterns, for example symmetries in constructions, or commonalities and differences across settings or over time, or compared with digital block play. Currently assessment of children’s activities in pre-school is often, like the curriculum, very different from primary school, and an app that could gather and showcase a portfolio of children’s real-world block play – potentially in virtual worlds if they wish – might enable more continuity in formative assessment across the transition from pre-school to primary.
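As a toy example of the kind of pattern analysis we have in mind, the sketch below checks whether a flat block construction, represented as a set of grid coordinates, has left-right mirror symmetry. Real constructions would need 3D coordinates and tolerance for imprecise placement; this is a deliberately simplified illustration.

```python
# Toy example of pattern analysis on abstracted block data: testing
# whether a flat construction (a set of grid cells) is mirror-symmetric
# about its vertical centre line. Deliberately simplified: real block
# play data would be 3D and need tolerance for imprecise placement.

def is_mirror_symmetric(cells):
    """True if the cell set maps onto itself under left-right reflection."""
    xs = [x for x, _ in cells]
    axis = min(xs) + max(xs)  # twice the centre x, kept as an integer
    return all((axis - x, y) in cells for x, y in cells)

steps = {(0, 0), (1, 0), (2, 0), (1, 1)}   # a symmetric 'pyramid'
lshape = {(0, 0), (0, 1), (1, 0)}          # an asymmetric L-shape
print(is_mirror_symmetric(steps), is_mirror_symmetric(lshape))
```

Simple predicates like this, computed over a portfolio of digitised constructions, are one way such commonalities and differences could be surfaced for teachers automatically.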

Expanding the remit

We are also interested in applications of a simple set of physical blocks as an interface – for example for playing musical notes, modelling language, or modelling atomic reactions in climate science – as well as for children with visual impairments who may not be able to see touch screens easily. There is also the potential to translate the digital 3D models of children's physical block constructions into current 3D online block metaverses such as Minecraft, bridging the two worlds.

We are keen to work with partners across creative and technical disciplines who are interested in exploring opportunities to augment physical block play with multi-modal digital experiences. If you would like to learn more or have a chat about the project, please get in touch: alf.coles@bristol.ac.uk
