No-Code Training

Access to the high-performance development environment

Development environment for fast model training

Training data with maximum impact on performance

Efficient AI.on model development: With the AI.on development environment, we train high-performance models for contextual text and document analysis.

AI.on's innovative active learning technology, combined with the ability to optimize raw data, annotations, and models in short iteration cycles, enables training in record time.

In addition to our own staff, our customers' data scientists can use the AI.on development environment and benefit from its innovative AI technology. Ask us.

Efficiently train contextual analysis models
Data-Centric Active Learning Technology
Up to 80% less training time
Train with up to 98% fewer examples
Reading-Order-Aware OCR
Rapid development even for long and complex documents

Reach high precision quickly

Training contextual data extraction

The key to accurate contextual text and document analysis is training data. With the AI.on development platform, a few examples become high-performance analysis models, thanks to an end-to-end workflow that continuously corrects and optimizes the data.

Efficient pre-processing

Perfect training data in record time

  • Reading-Order-Aware OCR in more than 100 languages (a simplified sketch follows this list)
  • Integrated document conversion supports PDF, image documents, Microsoft Office, JSON and text
  • Generative language models for synthesizing training data
  • Full document history with “time-travel” feature
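
AI.on's pre-processing pipeline itself is not shown here; as a rough, generic sketch of what reading-order-aware OCR involves, the snippet below converts a PDF into page images, runs OCR, and emits the recognized words grouped by block, paragraph, and line. The file name, the libraries used (pdf2image, pytesseract), and the simple ordering heuristic are illustrative assumptions, not AI.on's implementation.

```python
# Generic sketch of reading-order-aware OCR (not AI.on's implementation):
# convert PDF pages to images, run OCR, and emit words in block/line order.
# Requires: pip install pdf2image pytesseract (plus poppler and tesseract binaries).
from pdf2image import convert_from_path
import pytesseract
from pytesseract import Output

def pdf_to_ordered_text(pdf_path: str) -> str:
    pages = convert_from_path(pdf_path)          # one PIL image per page
    lines = []
    for page in pages:
        data = pytesseract.image_to_data(page, output_type=Output.DICT)
        words = [
            (data["block_num"][i], data["par_num"][i], data["line_num"][i],
             data["word_num"][i], data["text"][i])
            for i in range(len(data["text"]))
            if data["text"][i].strip()
        ]
        words.sort()  # order by block, then paragraph, then line, then word position
        current_key, current_words = None, []
        for block, par, line, _word_num, word in words:
            key = (block, par, line)
            if key != current_key and current_words:
                lines.append(" ".join(current_words))
                current_words = []
            current_key = key
            current_words.append(word)
        if current_words:
            lines.append(" ".join(current_words))
    return "\n".join(lines)

print(pdf_to_ordered_text("example_invoice.pdf"))  # hypothetical input file
```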

Intelligent learning & data enrichment

Data annotation with a fraction of the effort

  • Active Learning optimizes training data for superior results with up to 98% less annotated data (a generic sketch follows this list)
  • Optimized annotation interface for long documents
  • Fast training-annotation-retraining workflow without switching between tools
  • Multi-user functionality with annotation guidelines
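
AI.on's active learning technology is proprietary; the following generic sketch of pool-based uncertainty sampling only illustrates the underlying idea of why far fewer annotated examples can suffice. The synthetic dataset, seed size, and query batch size are placeholders.

```python
# Generic pool-based active learning sketch (least-confidence sampling).
# Not AI.on's algorithm; the data and the "annotator" are simulated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a pool of unannotated documents.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

labelled = list(range(20))                                # small annotated seed set
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

for round_ in range(10):
    model = LogisticRegression(max_iter=1000).fit(X_pool[labelled], y_pool[labelled])
    probs = model.predict_proba(X_pool[unlabelled])
    uncertainty = 1.0 - probs.max(axis=1)                 # least-confidence score
    query = np.argsort(uncertainty)[-10:]                 # 10 most uncertain examples
    picked = [unlabelled[i] for i in query]
    labelled.extend(picked)                               # only these go to the annotator
    unlabelled = [i for i in unlabelled if i not in picked]
    print(f"round {round_}: {len(labelled)} labels, "
          f"test acc {model.score(X_test, y_test):.3f}")
```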

Optimized training process

One-click model training and model management

  • Latest-generation Transformer models in various languages
  • Auto-scaling GPU training with one click
  • Model results and model comparison in a well-organized interface
  • MLflow integration for detailed training statistics
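
MLflow is an open-source experiment tracker; how AI.on wires it into the training workflow is not detailed here, but logging training statistics to MLflow typically looks like the sketch below. The experiment name, parameters, and metric values are purely illustrative.

```python
# Illustrative MLflow tracking of a training run; names and values are
# placeholders, not AI.on's actual schema.
import mlflow

mlflow.set_experiment("contract-extraction")      # hypothetical experiment name

with mlflow.start_run(run_name="transformer-v3"):
    mlflow.log_param("base_model", "xlm-roberta-base")
    mlflow.log_param("max_seq_length", 512)
    mlflow.log_param("learning_rate", 3e-5)
    # Metric values here are made up; a real run would log them each epoch.
    for epoch, (loss, f1) in enumerate([(0.42, 0.81), (0.28, 0.89), (0.21, 0.93)]):
        mlflow.log_metric("train_loss", loss, step=epoch)
        mlflow.log_metric("val_f1", f1, step=epoch)
    # Attach any report or model card produced by the run.
    with open("model_card.md", "w") as f:
        f.write("val_f1: 0.93\n")
    mlflow.log_artifact("model_card.md")
```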

Performant and compatible

Automated deployment and optimal inference performance

  • Up to 70% shorter inference time through automatic model quantization and serialization (illustrated in the sketch after this list)
  • Prediction service with documented REST API
  • Deployment as Docker container for maximum compatibility
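
The exact optimizations performed by the AI.on deployment pipeline are not published here; as a generic sketch of the technique, the snippet below applies PyTorch dynamic quantization to a toy network and serializes both variants for comparison. The toy model and file names are stand-ins for a real extraction model.

```python
# Sketch of post-training dynamic quantization and serialization in PyTorch;
# the toy network stands in for a trained document-analysis model.
import os
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(768, 768), nn.ReLU(),
    nn.Linear(768, 768), nn.ReLU(),
    nn.Linear(768, 32),
)

# Replace the Linear layers with int8 kernels for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Serialize both variants and compare their on-disk size.
torch.save(model.state_dict(), "model_fp32.pt")
torch.save(quantized.state_dict(), "model_int8.pt")
print("fp32:", os.path.getsize("model_fp32.pt"), "bytes")
print("int8:", os.path.getsize("model_int8.pt"), "bytes")
```

The quantized, serialized model would then be served behind the documented REST prediction API inside a Docker container.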

Efficient training

Fast iteration on the training data

Maximum leverage: iterating on data, annotations, and models, rather than on model architecture and parameters, can significantly accelerate the training process.

Talk to an expert