Rapidly build AI products

Meteron handles load-balancing, ordering, storage and limits for your AI systems.


All-in-one AI toolset

Free developers from time-consuming, unnecessary processes that slow your work, so you and your team can focus on creating

Kickstart your project

Putting Meteron in front of your AI inference servers immediately solves your work-queueing and storage problems. Instead of spending an unknown amount of money trying to autoscale your servers, the better option is to process requests asynchronously.

Batteries included

Meteron is a great fit for startups. Use it to enforce business rules for your customers: automatically prioritize paid customers while letting free users utilize the servers during idle times.

Auto-scaling, enhanced security

With Meteron, you can scale your AI inference servers up and down automatically. Meteron also provides an easy-to-use API and SDK to integrate with your existing infrastructure.

An ecosystem of integrations


Pricing

Start for free, upgrade at any time.

Monthly billing (yearly billing with a 10% discount coming soon)

                      Free       Professional   Business
Price                 $0 / mo    $39 / mo       $199 / mo
Admins & Members      1          5              30
File Storage          5GB        300GB          2TB
Image Generations     1,500      10,000         100,000

Features (all plans): Per user metering, Elastic queue (absorb high demand spikes), Server concurrency control, Intelligent QoS, Cloud Storage, Performance Tracking, Automatic Load Balancing, Automatic Retries, Custom Cloud Storage (your own S3, GCS, Azure Storage, etc.), Data Export.

Example Projects

Check out examples of how to build your own projects with Meteron.

Vue.js: A lightweight AI application built with Vuetify, Lightning AI and Meteron. Showcases queueing, asset loading and generation.

React: An advanced, multi-tenant example where users can register and generate their assets. Assets, queueing and metering are enforced by Meteron. Generation is done by replicate.com.

Python: A collection of individual functions showing how to send image generation requests through Meteron, query results, enforce per-user limits and more.

FAQs

Do I need to use any special libraries when integrating Meteron?

Nope, you can use your favorite HTTP client such as curl, Python requests, or the JavaScript fetch API. Instead of sending a request to your inference endpoint, you send it to Meteron's generation API. This API can be either blocking or non-blocking, returning a reference to the generated image.
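
For illustration, here is a minimal sketch using Python requests. The endpoint URL, API-key header and payload fields are assumptions made for this example only; check the Meteron documentation for the actual values.

import requests

# Hypothetical generation endpoint and API key (assumptions for this sketch).
METERON_GENERATE_URL = "https://app.meteron.ai/api/v1/images/generations"
API_KEY = "YOUR_METERON_API_KEY"

# Send the prompt to Meteron instead of calling your inference server directly.
response = requests.post(
    METERON_GENERATE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "a watercolor painting of a lighthouse"},
    timeout=60,
)
response.raise_for_status()

# In blocking mode the response contains a reference (URL) to the generated image.
print(response.json())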

How do I tell Meteron where my servers are?

You can do it through the web UI if your servers are static or rarely change. However, we also provide a simple API that you can use to update your servers dynamically, in real time, for example when you are running on AI platforms such as lightning.ai, runpod.io, etc.
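
As a rough illustration, a sketch of registering an inference server dynamically with Python requests; the cluster-servers endpoint and request body used here are hypothetical placeholders, not the documented Meteron API.

import requests

API_KEY = "YOUR_METERON_API_KEY"
# Hypothetical endpoint for updating the servers behind a Meteron cluster (assumption).
CLUSTER_SERVERS_URL = "https://app.meteron.ai/api/v1/clusters/my-cluster/servers"

# Tell Meteron where a freshly started inference server (e.g. on runpod.io) is listening.
resp = requests.put(
    CLUSTER_SERVERS_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"servers": ["https://my-runpod-instance.example.com:8000"]},
    timeout=30,
)
resp.raise_for_status()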

How does the queue prioritization work?

By default, Meteron provides several standard business rules. With each request you can specify a priority class (high, medium, low), where high-priority requests are for your VIP users and will not incur any queueing delays. Medium-priority requests will incur delays but will always be ahead of low-priority ones. Low-priority requests are served last; these can be your "free" users, served when there is no load on the system.
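
For example, a minimal sketch of tagging a request with a priority class in Python. The X-Priority header name is an assumption for illustration; the documentation defines how the class is actually passed.

import requests

API_KEY = "YOUR_METERON_API_KEY"
# Hypothetical generation endpoint, as in the earlier sketch.
METERON_GENERATE_URL = "https://app.meteron.ai/api/v1/images/generations"

# Paid/VIP traffic: mark the request as high priority so it skips queueing delays.
requests.post(
    METERON_GENERATE_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-Priority": "high",  # assumption: one of "high", "medium", "low"
    },
    json={"prompt": "product photo of a ceramic mug"},
    timeout=60,
)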

Do I need coding knowledge to use this product?

Meteron is a "low-code" service, so some knowledge of HTTP is needed. However, we do provide examples of how to integrate Meteron. If you get stuck, join our Discord server and we will be happy to help out.

Can I host Meteron server myself?

Yes, on-prem licenses are available. You will get a batteries-included system that you can run on any cloud provider. Contact us for more info at [email protected].

What forms of payment do you accept?

We accept all major credit cards as well as direct wire transfers.

How does per-user metering work?

When adding model endpoints (clusters) in Meteron, you can specify daily and monthly limits. When these limits are specified, add the X-User header with the user ID (or email) to each image generation request and we will ensure that this user cannot go above the limits.
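
For example, a minimal sketch of metering a request against a specific end user via the X-User header described above; the endpoint URL is an assumption carried over from the earlier sketches.

import requests

API_KEY = "YOUR_METERON_API_KEY"
METERON_GENERATE_URL = "https://app.meteron.ai/api/v1/images/generations"  # assumption

# Attribute this generation to one end user so their daily/monthly limits are enforced.
resp = requests.post(
    METERON_GENERATE_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-User": "user-1234@example.com",  # user ID or email, as in the FAQ above
    },
    json={"prompt": "isometric illustration of a data center"},
    timeout=60,
)

# A non-2xx status would suggest the user has exceeded their limit (assumption).
print(resp.status_code)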

Join the AI-centric platform for building products