How It Works

Inference API
  1. Connect Your Application: Use the Co-Builder interface to link your application or agent to the Inference API.
  2. Select an LLM: Choose the LLM best suited to your app or agent.
  3. Real-Time Processing: The API processes requests in real time and returns results with low latency.
  4. Monitor Usage: Track performance metrics and adjust settings to balance cost and efficiency.
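As a rough sketch of steps 1–3, the snippet below assembles a single inference request. The endpoint URL, header names, and payload fields here are illustrative assumptions, not the actual Co-Builder schema; consult the platform documentation for the real routes and parameters.

```python
import json

# Hypothetical endpoint -- the real Inference API route may differ.
API_URL = "https://api.example.com/v1/inference"

def build_inference_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble headers and a JSON body for one inference call.

    `model` corresponds to step 2 (LLM selection); the body is what
    step 3 (real-time processing) would receive. All field names are
    assumed for illustration.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,    # the LLM chosen for this app/agent
        "input": prompt,   # the data to process
        "stream": False,   # set True for incremental delivery, if supported
    }
    return {"url": API_URL, "headers": headers, "json": body}

req = build_inference_request("example-llm-7b", "Summarize this ticket.", "sk-demo")
print(json.dumps(req["json"], indent=2))
```

Once connected, step 4 amounts to watching latency and token-usage metrics for calls like this and tuning the model choice or payload options accordingly.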


Deploy Your Own Apps/Agents
  1. Upload Your Dataset: Upload an existing dataset or configure a new one within the Co-Builder platform.

  2. Select Resources: Choose from centralized or decentralized GPUs for deployment, ensuring scalability and cost efficiency.

  3. Train and Scale: Co-Builder handles training and scaling based on performance requirements.

  4. Monitor Performance: Use real-time dashboards to oversee training speed, utilization, and outcomes.

  5. Monetize: List your app on the AI App Store and earn rewards from subscriptions and usage.
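The deployment steps above can be sketched as a minimal configuration object. Every field name, the GPU pool labels, and the cost heuristic below are assumptions made for illustration; the actual Co-Builder deployment schema is not specified here.

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    """Hypothetical deployment descriptor mirroring steps 1-4."""
    dataset_id: str         # step 1: reference to the uploaded dataset
    gpu_pool: str           # step 2: "centralized" or "decentralized"
    max_replicas: int       # step 3: scaling ceiling for training/serving
    dashboard_alerts: bool  # step 4: enable real-time monitoring alerts

def pick_gpu_pool(budget_per_hour: float, threshold: float = 2.0) -> str:
    """Toy cost heuristic for step 2: assume decentralized GPU pools
    are cheaper and centralized pools are more predictable, so choose
    centralized only when the hourly budget clears the threshold."""
    return "centralized" if budget_per_hour >= threshold else "decentralized"

cfg = DeploymentConfig(
    dataset_id="ds-001",
    gpu_pool=pick_gpu_pool(budget_per_hour=1.5),
    max_replicas=4,
    dashboard_alerts=True,
)
print(cfg.gpu_pool)  # → decentralized
```

The point of the heuristic is only to show the trade-off named in step 2: decentralized GPUs for cost efficiency, centralized GPUs when predictable capacity matters more than price.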