Enterprise grade edge AI software.

LGN sells AI software products and works with enterprises as a long-term partner to design, develop and operate edge AI systems successfully at scale.

Model optimisation for edge hardware.

LGN Ultra is a model optimisation suite that adapts AI/ML models to run quickly and reliably on low-cost, resource-constrained hardware.

Optimise models for resource-constrained edge devices

  • adapt your existing AI/ML models to run on real hardware
  • compile models to run natively for specific architectures and runtime environments

Quantisation, pruning, simplification, binary neural networks

  • reduce model complexity whilst maintaining performance
  • reduce memory use, speed up inference times and enable supervision and monitoring
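As a rough illustration of the quantisation step described above, the sketch below maps float weights onto 8-bit integers with a single symmetric scale factor. This is a minimal, self-contained example for intuition only; it assumes a flat list of weights, whereas a production optimisation suite operates on whole model graphs with per-layer calibration.

```python
# Minimal sketch of symmetric int8 weight quantisation.
# Assumes non-zero float weights in a flat list; real tooling works
# on full model graphs with per-tensor or per-channel scales.

def quantise_int8(weights):
    """Map float weights onto int8 values using one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantise(q, scale):
    """Recover approximate float weights, e.g. for accuracy checks."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.89]
q, scale = quantise_int8(weights)
approx = dequantise(q, scale)
```

Storing `q` instead of `weights` cuts memory use by 4x (int8 vs float32), which is one of the ways quantisation reduces model footprint and speeds up inference on constrained hardware.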

Unique ultra low latency inference technology

  • radically reduce neural network inference time on GPU and FPGA
  • using LGN’s proprietary, patent pending ultra low latency inference technology


Optimising GPU inference

LGN worked with a major systems integrator to optimise their in-car perception system running on GPU hardware.

Our customer wanted to widen the field of view of their autonomous vehicle perception grid by adding LiDAR and camera sensors to the system. However, they were constrained by the inference time achievable on pre-specified GPU hardware.

LGN's solution was to optimise the perception models to reduce their size, complexity and inference times using a range of hardware and sensor specific optimisation techniques.

This allowed the system to deploy the additional sensors and process their data, increasing the field of view and perception-grid resolution by 4x.

Result → 100x faster

We optimised inference to run in 1% of the original time, allowing the system to process 100x more data in the same time on the same hardware.

Get in touch

Contact LGN now to find out more about our edge AI products and solutions.

Contact us: sales@lgn.ai