Edge AI at scale in the real world.

LGN’s real world solutions help you scale out edge AI deployments, optimise models for real hardware and build resilience to edge cases and externalities.

Scale

The current paradigm in AI development is to gather training data, build simulations and train models in the lab. This leads to challenges with edge cases and leaves models poorly prepared for the variety of data they’re going to encounter in the real world.

The new paradigm is to deploy models into the real world early in their development, at scale, as part of a parallel learning system. This exposes models to a far greater variety of data and allows the system to reach higher accuracy much faster than incremental, lab based learning.

This approach is especially powerful for companies that have existing fleets of vehicles and edge devices that can be leveraged to enable massively parallel learning.

Operate large scale edge AI systems

  • orchestrate large, fleet scale deployments of edge AI
  • operate with low bandwidth and intermittent connectivity
  • supervise models and intelligently select data on device (see the sketch after this list)
  • control your transfer, storage and processing costs
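
To make the on-device data selection concrete, here is a minimal Python sketch. The frame format, the entropy-based scoring and the upload budget are illustrative assumptions rather than a description of LGN's implementation; the idea is simply that when bandwidth is scarce, the device uploads the samples its model is least certain about.

    # Minimal sketch: pick the most informative frames for upload when
    # bandwidth is limited. The frame format, the entropy-based scoring
    # and the budget are illustrative assumptions only.
    import numpy as np

    def prediction_entropy(probs: np.ndarray) -> float:
        """Shannon entropy of a softmax output; higher means less certain."""
        p = np.clip(probs, 1e-12, 1.0)
        return float(-(p * np.log(p)).sum())

    def select_for_upload(frames, upload_budget_bytes):
        """Rank frames by model uncertainty and keep as many as the budget allows."""
        ranked = sorted(frames, key=lambda f: prediction_entropy(f["probs"]),
                        reverse=True)
        selected, used = [], 0
        for frame in ranked:
            if used + frame["size_bytes"] <= upload_budget_bytes:
                selected.append(frame)
                used += frame["size_bytes"]
        return selected

    # Three frames scored by a hypothetical on-device classifier.
    frames = [
        {"id": "f1", "size_bytes": 200_000, "probs": np.array([0.98, 0.01, 0.01])},
        {"id": "f2", "size_bytes": 180_000, "probs": np.array([0.40, 0.35, 0.25])},
        {"id": "f3", "size_bytes": 220_000, "probs": np.array([0.55, 0.30, 0.15])},
    ]
    for f in select_for_upload(frames, upload_budget_bytes=400_000):
        print(f["id"])  # the uncertain frames win the limited budget

In a fleet setting, frames selected this way would sit in an upload queue until connectivity is available, which is how intermittent links and transfer costs can be kept under control.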

Optimisation

Moving AI out of the lab and into the real world means processing real sensor inputs on real hardware.

For deployments at scale, the processing hardware is often highly cost constrained, which makes it genuinely challenging to run inference on high bandwidth sensor feeds within memory limits and at adequate speed.

LGN's optimisation solution allows companies and data science teams to adapt models and AI system design to work successfully on constrained hardware. It combines standard techniques such as architecture optimisation and model simplification with advanced proprietary techniques for data compression and ultra low latency inference.

Optimise models for real hardware

  • optimise models to run fast and reliably on edge devices with low cost, constrained hardware
  • quantisation, pruning, simplification and binary neural networks (a minimal quantisation sketch follows this list)
  • unique ultra low latency inference technology
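
As a concrete example of the first technique in the list above, the sketch below applies PyTorch's post-training dynamic quantisation to a toy model. It is a generic baseline, not a description of LGN's tooling; the toy model and layer choices are assumptions for illustration.

    # Minimal sketch of post-training dynamic quantisation with PyTorch.
    # The toy model is illustrative only; this is one generic baseline
    # technique, not a description of LGN's tooling.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )
    model.eval()

    # Linear layers are replaced with int8 equivalents: weights are stored
    # in int8 and activations are quantised on the fly at inference time.
    quantised = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 128)
    with torch.no_grad():
        print(quantised(x).shape)  # same interface, smaller and faster on CPU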

Resilience

To operate successfully in the real world, AI systems need to cope not only with unseen data, anomalies and edge cases but also with sensor degradation and failure.

LGN deploys edge AI systems at scale with a continuous supervision and learning workflow. This allows data scientists to see how their models are performing and to gather high value training data the models can learn from and use to adapt to uncertainty.
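
As a rough illustration of what such a supervision workflow can look like, the Python sketch below flags samples the deployed model handled poorly and signals when enough have accumulated to justify a retraining round. The class, its method names and the thresholds are assumptions for illustration only.

    # Minimal sketch of a continuous supervision loop: samples the deployed
    # model handled poorly are flagged, and a retraining round is signalled
    # once enough flagged data has accumulated. Names and thresholds are
    # illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SupervisionLoop:
        retrain_threshold: int = 500          # illustrative, not a recommendation
        flagged: List[str] = field(default_factory=list)

        def report(self, sample_id: str, confidence: float,
                   correct: Optional[bool] = None) -> None:
            """Record the outcome of one on-device prediction."""
            # Low confidence or a confirmed mistake marks the sample as
            # high value training data worth sending back for labelling.
            if confidence < 0.6 or correct is False:
                self.flagged.append(sample_id)

        def ready_to_retrain(self) -> bool:
            return len(self.flagged) >= self.retrain_threshold

    loop = SupervisionLoop(retrain_threshold=2)
    loop.report("frame-001", confidence=0.95, correct=True)   # fine, ignored
    loop.report("frame-002", confidence=0.41)                 # uncertain, flagged
    loop.report("frame-003", confidence=0.88, correct=False)  # wrong, flagged
    print(loop.flagged, loop.ready_to_retrain())  # ['frame-002', 'frame-003'] True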

In addition, our unique latent space technology makes sensor fusion and perception systems resilient to sensor degradation and failure. This allows systems to tolerate externalities and minimises the need for recalibration and retraining once they’re operational in the real world.
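
LGN's latent space technology is proprietary, so the sketch below only illustrates the general idea in generic terms: each sensor is encoded into a shared latent space and the fused representation is built from whichever sensors are currently healthy, so a failed sensor is excluded rather than corrupting the output. The names, shapes and simple averaging are assumptions, not LGN's method.

    # Generic illustration only: each sensor is encoded into a shared latent
    # space and the fused embedding averages whichever sensors are healthy,
    # so a failed sensor is excluded rather than corrupting the result.
    # Shapes, names and the simple averaging are assumptions, not LGN's method.
    import numpy as np

    def fuse(latents, healthy):
        """Average per-sensor latent vectors over healthy sensors only."""
        active = [z for name, z in latents.items() if healthy.get(name, False)]
        if not active:
            raise RuntimeError("no healthy sensors available")
        return np.mean(active, axis=0)

    latent_dim = 8
    latents = {
        "camera": np.random.randn(latent_dim),
        "lidar":  np.random.randn(latent_dim),
        "radar":  np.random.randn(latent_dim),
    }

    # With all sensors healthy or with the camera failed, downstream code
    # receives a latent vector of the same shape, so nothing needs recalibrating.
    print(fuse(latents, {"camera": True,  "lidar": True, "radar": True}).shape)
    print(fuse(latents, {"camera": False, "lidar": True, "radar": True}).shape)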

Tolerate edge cases and externalities

  • use continuous learning to adapt and learn from real world data, environments, anomalies and edge cases
  • use LGN's unique latent space technology to handle sensor failure and degradation without needing to recalibrate or retrain your system

Get in touch

Contact LGN now to find out more about our edge AI products and solutions.

Contact us: sales@lgn.ai