This blog introduces our micromodel technology, gives a snapshot update on its progress, and outlines our future roadmap. The technology is designed to address the challenges of applying Artificial Intelligence in real-world, dynamic operational contexts to support the capture and delivery of effective decision intelligence.
What are micromodels?
The concept is pretty simple. We take a large, complex problem and break it into smaller, more specific chunks, deploying targeted AI models to specific parts of the problem. We then network multiple models together, along with other data sources and third-party models, creating a dynamic network of micromodels.
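As a rough sketch of the idea (the names and numbers here are illustrative, not Entopy's actual implementation), each micromodel owns one narrow part of the problem, and an orchestration step runs them over a shared context and collects their outputs:

```python
# Illustrative sketch of a micromodel network: many small, independent
# predictors whose outputs are gathered by an orchestration step.
from typing import Callable, Dict


class Micromodel:
    """A small model responsible for one specific part of a problem."""

    def __init__(self, name: str, predict: Callable[[dict], float]):
        self.name = name
        self.predict = predict


def orchestrate(models: Dict[str, Micromodel], context: dict) -> dict:
    """Run every micromodel on the shared context and collect outputs."""
    return {name: m.predict(context) for name, m in models.items()}


# Two toy micromodels, each covering a different road segment.
network = {
    "segment_a": Micromodel("segment_a", lambda ctx: 100.0 * ctx["demand"]),
    "segment_b": Micromodel(
        "segment_b", lambda ctx: 80.0 * ctx["demand"] + ctx["event_delay"]
    ),
}

outputs = orchestrate(network, {"demand": 1.2, "event_delay": 15.0})
```

Because each model is independent, a segment can be added, retrained, or removed without disturbing the rest of the network.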
The technology addresses several key challenges in deploying AI in complex, dynamic, multistakeholder environments.
Initial use case.
The initial iteration of our micromodel technology has naturally been use-case-led: delivering dynamic intelligence on future traffic flows to a major UK port, which we will cover in more detail in a later blog and case study. At a high level, it required micromodels to be deployed as part of a Digital Twin across the port's strategic road network.
This required us to develop AI models capable of predicting traffic flows on specific stretches of road, and to leverage data from third-party models, such as weather forecasts and network models, together with real-time event-based data, such as traffic accidents and scheduled road maintenance, under an overarching orchestration model.
This was a perfect initial use case for the technology. Road networks naturally have many variables – they are dynamic. ‘Black swan’ events occur regularly, many stakeholders are involved, and there is a need for iterative deployment (especially when extending an initial twin over time). And because the use case supports operators in making more data-driven decisions about traffic management protocols, there is a high need for explainable outputs to build confidence in the intelligence and, ultimately, drive action.
And, although use-case-led, the initial iteration of Entopy’s micromodel technology has wide applicability. By breaking the overall problem into smaller, more specific chunks, you create ‘atomic models for atomic problems’. For example, the micromodels deployed for this use case can easily support use cases for other infrastructure such as airports, councils, shopping centres and stadiums, wherever predictive intelligence about traffic movement is needed.
Over the past months, we have validated the technology, proven key aspects, achieved key technical milestones, automated core parts of the technology to ensure high repeatability and transferability to other use cases, and fleshed out our future roadmap.
The performance of AI models within the use case has been exceptional. By breaking the problem into smaller chunks and increasing the resolution of specific parts of the problem, we have been able to achieve a very good model/data fit very quickly.
The models are purposely small, with the initial traffic flow model comprising only 26 features. Multiple instances of these models are deployed, operating independently, and networked together. Not only does this support dynamic intelligence but also acts as a good check and balance across the network.
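To give a feel for how small these models are (the feature set and weights below are made up for illustration; the actual 26 features are not published here), a 26-feature predictor can be as simple as a linear combination:

```python
# Illustrative sketch of a deliberately small model: a linear predictor
# over a fixed set of 26 features. Weights here are random placeholders,
# standing in for coefficients learned from historical traffic data.
import random

NUM_FEATURES = 26


def predict(weights: list, bias: float, features: list) -> float:
    """Linear combination of a small, fixed feature set."""
    assert len(weights) == len(features) == NUM_FEATURES
    return bias + sum(w * f for w, f in zip(weights, features))


random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(NUM_FEATURES)]
features = [1.0] * NUM_FEATURES  # placeholder observation

flow = predict(weights, 50.0, features)
```

Running many independent instances of such a small model, each tuned to its own road segment, is what lets neighbouring models act as a check and balance on one another.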
We have now deployed many models across the UK with an average accuracy of >85%. Below is a screenshot of a recently deployed model capturing a significant, outlying spike in traffic flow. The blue line is the prediction; the green line is what actually happened.
Each model feeds into our core software, which uses concepts such as RDF and ontologies to orchestrate the outputs together with other data inputs. Ultimately, the network informs an overarching algorithm that predicts traffic to the port.
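As a minimal sketch of the RDF idea (a real system would use an RDF store and a formal ontology; the entity names here are hypothetical), model outputs and road-network entities can be related as subject–predicate–object triples and then traversed to find which models matter for the port:

```python
# Minimal sketch of RDF-style (subject, predicate, object) triples used
# to relate micromodels to road-network entities. Illustrative only.
triples = set()


def add(subject: str, predicate: str, obj: str) -> None:
    triples.add((subject, predicate, obj))


def query(subject=None, predicate=None, obj=None) -> list:
    """Match triples against an optional pattern (None = wildcard)."""
    return [
        t
        for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]


add("model:segment_a", "predicts", "road:segment_a")
add("model:segment_b", "predicts", "road:segment_b")
add("road:segment_a", "feeds", "port:example_port")  # hypothetical graph

# Follow the graph: which roads feed the port, and which models cover them?
port_roads = {s for s, _, _ in query(predicate="feeds", obj="port:example_port")}
port_models = [s for s, _, o in query(predicate="predicts") if o in port_roads]
```

The point of the graph representation is that the overarching algorithm never needs hard-coded knowledge of individual models; it discovers relevant outputs by walking relationships.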
We have measured the efficacy of the network against the port’s commercial data with an accuracy of >90% since initial deployment.
The network of micromodels has effectively captured real-time event-based data such as traffic accidents and road maintenance, updating the overall prediction accordingly.
The image below shows some micromodels deployed within our macro view interface. Each of the blue dots is an independent AI model, predicting traffic at that specific part of the road. These form nodes on a network with real-time events such as congestion, traffic accidents and road maintenance being captured, classified, and introduced into the network based on their location.
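A sketch of that event-capture step (coordinates, node names, and the nearest-node rule are all illustrative assumptions, not our production logic) might attach each incoming event to the closest model node:

```python
# Sketch: classify a real-time event (accident, roadworks) and attach it
# to the nearest network node by location. Illustrative names and values.
import math

# Toy node locations (latitude, longitude) for two model nodes.
nodes = {
    "segment_a": (51.95, 1.30),
    "segment_b": (52.05, 1.15),
}


def nearest_node(lat: float, lon: float) -> str:
    """Pick the model node closest to the event (simple Euclidean)."""
    return min(nodes, key=lambda n: math.dist(nodes[n], (lat, lon)))


def ingest_event(event: dict) -> dict:
    """Route an event into the network based on its location."""
    node = nearest_node(event["lat"], event["lon"])
    return {"node": node, "type": event["type"]}


attached = ingest_event({"type": "accident", "lat": 51.96, "lon": 1.29})
```

Once attached, the affected node's prediction can be adjusted, and the change propagates through the network to the overall port forecast.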
Addressing the obvious downside.
What we are effectively describing is a multi-layer perceptron (MLP) built node by node, or layer by layer, with each node being an independent AI model.
There are advantages, but the obvious challenge is the time it takes to deploy. It is here that Entopy’s micromodel technology development has focused.
We have developed scripts and tools that automate the entire model deployment process, meaning we can deploy hundreds of model instances in a very short period. This has been extensively tested and can be repeated for other use cases.
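In spirit (the config fields and naming scheme below are assumptions for illustration, not our actual tooling), the automation stamps out many instances of the same model template from per-location configs:

```python
# Illustrative sketch of automated deployment: generating many model
# instances from per-location configs. Field names are assumptions.
def deploy_instance(config: dict) -> dict:
    """Pretend-deploy one micromodel instance; return its registry entry."""
    return {
        "id": f"traffic-flow-{config['location']}",
        "model": config["model_type"],
        "status": "deployed",
    }


# One config per monitored location; a real pipeline would load these
# from files and trigger training/deployment jobs per instance.
configs = [
    {"location": f"site_{i:03d}", "model_type": "traffic_flow_v1"}
    for i in range(100)
]

registry = [deploy_instance(c) for c in configs]
```

Because deployment is driven entirely by configuration, scaling to a new road network is a matter of supplying new configs rather than writing new code.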
Our ambition for the micromodel technology is to deliver effective operational and predictive intelligence to any operational environment. We are also growing our model library: as we deliver new and extended use cases, we will naturally build a powerful library of micromodels that can quickly ‘click in’ to new use cases.
But we have a few things in the roadmap that will accelerate this. Our recently announced partnership with the University of Essex and its Institute for Analytics & Data Science (IADS) is the start of a long-term collaboration to develop model generalisation and model federation.
These technology milestones will rapidly accelerate our timeline to realising our overall mission, ensuring we can deliver effective operational and predictive intelligence to many more use cases.