Integrate LGN directly into your AI system
LGN works directly with AI system developers.
We focus on high-data-volume, sensor-based applications, fleet-scale deployments and active / continuous learning systems.
LGN can be deployed on any edge hardware, without the need for high-end GPUs. We're already live as a standalone web app, and integrations with all major cloud providers are coming soon.
There are two primary deployment routes: cloud only, where you use LGN to optimise a cloud-based AI workflow, and edge + cloud, where you deploy LGN both in your cloud and across your node / edge deployment, for example on in-vehicle or IoT devices.
Edge + cloud
Deploy the LGN model directly onto edge hardware (GPU or custom chips) across your fleet or IoT deployment. The model then selects and compresses data to send to the LGN decoder running in your private cloud.
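A minimal sketch of what this edge-side loop could look like, assuming a hypothetical selection function and decoder endpoint (the names and URL below are illustrative placeholders, not the actual LGN SDK):

```python
# Illustrative sketch only: select_frames, compress step and the decoder URL
# are placeholder names, not part of a published LGN SDK.
import gzip
import json
import urllib.request

DECODER_URL = "https://decoder.example-private-cloud.internal/ingest"  # placeholder

def select_frames(frames):
    """Placeholder for the on-edge LGN model: keep only frames worth uploading."""
    return [f for f in frames if f.get("novelty", 0.0) > 0.8]

def send_to_decoder(frames):
    """Compress the selected frames and POST them to the decoder in your private cloud."""
    payload = gzip.compress(json.dumps(frames).encode("utf-8"))
    request = urllib.request.Request(
        DECODER_URL,
        data=payload,
        headers={"Content-Encoding": "gzip", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    captured = [{"frame_id": 1, "novelty": 0.93}, {"frame_id": 2, "novelty": 0.12}]
    send_to_decoder(select_frames(captured))
```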
Cloud only
Deploy the LGN model and decoder within your private cloud, for example using Amazon SageMaker or Azure ML Studio. LGN typically reads from an ingest bucket, processes the data, and writes the selected data back to an egest bucket.
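A minimal sketch of the cloud-only flow on AWS, assuming S3 buckets named lgn-ingest and lgn-egest and a placeholder selection step (bucket names and the selection logic are illustrative, not LGN's actual pipeline):

```python
# Illustrative sketch only: bucket names and is_worth_keeping are placeholders,
# not LGN's actual processing pipeline.
import boto3

s3 = boto3.client("s3")
INGEST_BUCKET = "lgn-ingest"  # bucket your pipeline writes raw data to
EGEST_BUCKET = "lgn-egest"    # bucket selected data is written back to

def is_worth_keeping(body: bytes) -> bool:
    """Placeholder for the LGN selection model."""
    return len(body) > 0

def process_ingest_bucket():
    # List objects in the ingest bucket, run selection, and copy keepers to egest.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=INGEST_BUCKET):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=INGEST_BUCKET, Key=obj["Key"])["Body"].read()
            if is_worth_keeping(body):
                s3.put_object(Bucket=EGEST_BUCKET, Key=obj["Key"], Body=body)

if __name__ == "__main__":
    process_ingest_bucket()
```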
Get in touch
Contact LGN using the form or details below.
Usual hours of business are 9am to 5pm UK time.
+44 (0)20 3488 2138