Enterprise grade edge AI software.

LGN sells AI software products and works with enterprises as a long-term partner to design, develop and operate edge AI systems successfully at scale.

Orchestration framework for fleet scale edge AI.

Neuroform is an open source, cloud native framework for orchestrating fleet scale edge AI that builds on the best available tools in the Kubernetes and OpenShift™ ecosystem.

Photo of an autonomous car braking

Manage the deployment of edge AI at fleet scale

  • harness the best available enterprise class orchestration frameworks for AI and IoT
  • manage AI/ML model deployment across large fleets of edge devices
Photo of a wind farm inspection

Monitor and improve real world model performance

  • monitor how your models perform when exposed to real world data
  • gather relevant data and use it to retrain and update models over time
Illustration of an AI learning

Reduce transfer, storage and processing costs

  • optimise data selection trade-offs to reduce the data you need to transfer and process
  • maintain visibility, insight and learning speed while doing so

Enterprise class, open source, cloud native.

Neuroform integrates into the Kubernetes, OpenShift and Open Data Hub ecosystem. This allows you to leverage best-in-class tools like TensorFlow, Kubeflow, Seldon, Prometheus and Grafana.

Train using your preferred tools
Deploy models as a service
Scale out across edge devices
Gather metrics and data

Orchestration frameworks

Extends best-in-class enterprise orchestration frameworks.

Building on the Open Data Hub architecture for AI/ML on OpenShift.

Machine learning pipelines

Integrates with Kubeflow, the machine learning toolkit for Kubernetes.

Supports TensorFlow Serving, Seldon Core, PyTorch and MXNet.

Monitoring and storage

Integrates with existing storage, metrics, alerting and reporting systems.

Including leading enterprise tools like Grafana, Prometheus, Kafka and Druid.

Extended for fleet scale, edge AI.

Neuroform extends this ecosystem with open source operators that provide fleet management, a supervised edge device runtime, and configurable workflows for data selection and re-training.



Diagram: machine learning workloads in the cluster orchestrating fleets of edge hardware

Fleet scale resource management

Custom Fleet resource that allows you to configure model deployment, supervision and data selection strategies.
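As a rough illustration, a Fleet resource might look like the sketch below, expressed as a Python dict mirroring a Kubernetes manifest. The API group, version and every field name here (deploymentStrategy, supervision, dataSelection and so on) are illustrative assumptions, not the actual Neuroform schema.

```python
# Hypothetical sketch of a Fleet custom resource as a Python dict.
# All field names below are assumptions for illustration only.
fleet_manifest = {
    "apiVersion": "neuroform.lgn.ai/v1alpha1",  # assumed group/version
    "kind": "Fleet",
    "metadata": {"name": "inspection-drones"},
    "spec": {
        "model": {
            "name": "defect-detector",
            "version": "1.4.0",
            "servingRuntime": "tensorflow-serving",  # or seldon-core
        },
        "deploymentStrategy": {
            "type": "RollingUpdate",
            "maxUnavailable": "10%",  # update a slice of the fleet at a time
        },
        "supervision": {
            "metricsInterval": "30s",
            "alertThresholds": {"accuracy": 0.9},
        },
        "dataSelection": {
            "strategy": "low-confidence",  # upload only uncertain samples
            "confidenceThreshold": 0.6,
            "maxUploadsPerHour": 100,
        },
    },
}
```

In a real deployment this kind of resource would be applied to the cluster, where an operator reconciles it against the fleet's actual state.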

Supervised edge device runtime

Docker-based runtime with a scheduler client, supervisor and MQTT broker for message-bus-based sensor integration.
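A minimal sketch of what a supervisor's message-bus callback could look like, assuming the broker delivers sensor readings as JSON payloads on per-device topics. The topic layout, payload shape and `on_sensor_message` function are illustrative assumptions, and the stub model stands in for a locally served inference runtime.

```python
import json

def on_sensor_message(topic: str, payload: bytes, model):
    """Hypothetical supervisor callback, invoked when the MQTT broker
    delivers a sensor reading. `model` is any callable that returns a
    (label, confidence) pair for the reading's data."""
    reading = json.loads(payload)
    label, confidence = model(reading["data"])
    return {
        "device": topic.split("/")[1],  # assumed topic: sensors/<device-id>/<sensor>
        "label": label,
        "confidence": confidence,
    }

# Stub standing in for a locally served model on the edge device.
stub_model = lambda data: ("ok", 0.97)

result = on_sensor_message("sensors/dev-42/camera", b'{"data": [1, 2, 3]}', stub_model)
```

The supervisor would feed results like this into both the metrics pipeline and the data selection stage described below.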

Data selection and re-training

Configure data selection, transfer and storage strategies to feed into TensorFlow and Kubeflow re-training workflows.
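One common data selection strategy is uncertainty-based sampling: upload only the samples the deployed model was least confident about, capped at a transfer budget. The sketch below is an assumed illustration of that idea, not Neuroform's actual implementation.

```python
def select_for_retraining(samples, confidence_threshold=0.6, budget=100):
    """Hypothetical data-selection pass: keep only samples the deployed
    model was unsure about, least confident first, capped at a budget
    to bound transfer and storage costs."""
    uncertain = [s for s in samples if s["confidence"] < confidence_threshold]
    uncertain.sort(key=lambda s: s["confidence"])
    return uncertain[:budget]

samples = [
    {"id": "a", "confidence": 0.95},
    {"id": "b", "confidence": 0.40},
    {"id": "c", "confidence": 0.55},
]
selected = select_for_retraining(samples, budget=2)
# → the two least confident samples, "b" then "c"
```

Raising the threshold or the budget trades transfer cost against how quickly the re-training set covers the model's weak spots.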

Dashboard, monitoring and alerts

Monitor your edge AI fleet in real time and get alerts when model performance drops below thresholds that you configure.
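Threshold-based alerting of this kind reduces to comparing the latest fleet metrics against operator-configured minimums. The sketch below shows that logic under assumed metric names; `check_alerts` is an illustrative function, not a Neuroform API.

```python
def check_alerts(metrics, thresholds):
    """Hypothetical alert check: compare latest fleet metrics against
    operator-configured minimums, returning (value, threshold) pairs
    for every metric that breached."""
    return {
        name: (value, thresholds[name])
        for name, value in metrics.items()
        if name in thresholds and value < thresholds[name]
    }

breaches = check_alerts(
    metrics={"accuracy": 0.87, "recall": 0.93},
    thresholds={"accuracy": 0.90, "recall": 0.85},
)
# accuracy 0.87 < 0.90 triggers an alert; recall 0.93 >= 0.85 does not
```

In practice these thresholds would live in the fleet configuration, with breaches surfaced through the dashboard and alerting tools mentioned above.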

Get in touch

Contact LGN now to find out more about our edge AI products and solutions.

Contact us Email sales@lgn.ai