
Jeton.AI

Edge AI Computing for Web3

A Large AI Model Acceleration Platform on Edge Devices for the Web3 World


Jeton.AI: Running LLMs Effectively on Mobile Devices

Fine-tuning / domain tuning / QAT models to fit specific AI needs with the best intelligence


SOTA techniques to compress LLM models


Leading algorithms to maximize the computing power of mobile devices


Multimodal/hybrid mode support for text chat, visual content generation, and visual understanding


Jeton.AI Unique Strengths

  • Personalized/Compressed AI Models on Mobile Devices
  • Unique Edge Computing Accelerator for Large Models Running on Mobile Devices
  • LLM and Visual AI Acceleration
  • Open SDK to Build Embedded AI DApps
  • Open-Source Edge Computing Framework
  • World's Top AI Performance Optimization Team

Jeton.AI for On-device Multi/Hybrid-Model LLM AI


Jeton.AI for on-device generative AI supports:

  • QWen2: 1.5B
  • Gemma2: 2B
  • LLAMA2: 7B
  • ChatGLM: 6B
  • Mistral-7B-Instruct: 7B
  • Stable Diffusion (SD) models
  • Multimodal models, etc.

with native Hugging Face compatibility

with SOTA INT4/INT3 model quantization (see the sketch below)
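As an illustration of what INT4 quantization means in practice, the sketch below shows a generic block-wise round-to-nearest scheme in NumPy. It is not Jeton.AI's actual compression pipeline; the block size, symmetric scaling, and absence of calibration data are simplifying assumptions.

    import numpy as np

    def quantize_int4_blockwise(weights, block_size=64):
        """Generic round-to-nearest INT4 quantization with one scale per block.

        Illustrative only; production SOTA schemes also use calibration data
        and error compensation.
        """
        flat = weights.reshape(-1, block_size)
        # Symmetric scale: map the largest magnitude in each block to 7 (INT4 max).
        scales = np.abs(flat).max(axis=1, keepdims=True) / 7.0
        scales[scales == 0] = 1.0                      # avoid division by zero
        q = np.clip(np.round(flat / scales), -8, 7).astype(np.int8)
        return q, scales

    def dequantize(q, scales, shape):
        return (q.astype(np.float32) * scales).reshape(shape)

    w = np.random.randn(128, 64).astype(np.float32)
    q, s = quantize_int4_blockwise(w)
    w_hat = dequantize(q, s, w.shape)
    print("mean abs reconstruction error:", np.abs(w - w_hat).mean())

The payoff is storage: each weight drops from 32 bits to 4 bits plus a small per-block scale, which is what makes 1.5B to 7B parameter models fit in mobile memory budgets.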


Jeton.AI Can be Easily Integrated with Web3


Jeton.AI Unified Platform

Text-to-Text AI

  • Intelligent Chat Bot (see the chat sketch below)
  • Intelligent Reasoning/Agents/RAG
  • Easy to Integrate
  • Easy to Use
  • Crypto Tokens Pay for LLM Tokens
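Because the models listed above are Hugging Face compatible, a single text-to-text chat turn can be sketched with the standard transformers API. The checkpoint id (Qwen2 1.5B Instruct) and generation settings are illustrative assumptions, and the default PyTorch backend stands in here for the edge accelerator.

    # Minimal chat sketch using a Hugging Face compatible model from the list above.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2-1.5B-Instruct"          # assumed checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    messages = [{"role": "user", "content": "Explain what an edge AI DApp is in one sentence."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=96)
    reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
    print(reply)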

Text-to-Image AI

  • Visual Content Generation
  • Native Diffusers Support (see the sketch below)
  • Graph Extraction and Optimization
  • Graph Fusion
  • Visual Content Understanding
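A minimal sketch of the "native Diffusers" path, using the standard Hugging Face diffusers pipeline API. The checkpoint id, target device, and step count are assumptions; Jeton.AI's graph extraction and fusion would sit below this API rather than replace it.

    # Text-to-image sketch with the Hugging Face diffusers API.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # or whichever device the on-device runtime targets

    image = pipe(
        "a low-poly illustration of a phone running a neural network",
        num_inference_steps=25,
    ).images[0]
    image.save("edge_ai.png")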

Hybrid/Multimodal

  • Integrated UI for Both Text and Visual Content
  • Smooth Transitions
  • Easy to Use

No GPU required

Data Sharing can earn Crypto Tokens


OpenAPI for Partner/DApp Integration

Saves up to 50% of AI infrastructure cost for DApps that provide AI services (see the integration sketch after the list below)

  • Model Pruning Service
  • Edge AI Node Acquiring Service
  • Hybrid Edge Acceleration with Cloud AI Service
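As a hedged sketch of how a partner DApp might call such an OpenAPI endpoint, the snippet below shows a plain HTTP integration. The base URL, route, payload fields, and bearer-token auth are hypothetical placeholders, not a published Jeton.AI specification.

    # Hypothetical DApp-side integration sketch; endpoint, fields, and auth
    # header are placeholders, not a published Jeton.AI API.
    import requests

    JETON_API = "https://api.example-jeton.ai/v1"   # placeholder base URL

    def generate_text(prompt: str, api_key: str) -> str:
        resp = requests.post(
            f"{JETON_API}/generate",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": "qwen2-1.5b-int4", "prompt": prompt, "max_tokens": 128},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["text"]

    print(generate_text("Hello from a Web3 DApp", api_key="..."))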

Orbit Coin Tokenomics (Draft)

Total token supply:

• TBD

Token Inflation:

• 20% over the next 2 years (to compensate for the growth of supply and demand; predicted growth rate 60% YoY)

Token distribution:

  • xx% for the team
  • xx% for the communities (10% reserved for first-year consumption, 50% reserved)
  • xx% for the rest

Token mining (for model/data providers) is determined by the following factors:

  • Developing apps with the AI Edge SDK
  • App usage time
  • Customized model/data sharing
  • Idle time

Token burning (for accelerator users) is determined by the following factors (illustrative sketch below):

  • LLM tokens used (exchanged for crypto AI tokens)
  • Required accelerator type
  • 5% burned
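The mining and burning factors above can be read as simple scoring functions. The sketch below is purely illustrative: the factor weights and accelerator rate are made-up placeholders, and only the 5% burn share comes from the draft.

    # Illustrative tokenomics sketch; weights and rates are placeholders,
    # only the 5% burn share comes from the draft above.
    from dataclasses import dataclass

    BURN_SHARE = 0.05  # "5% burned" from the draft

    @dataclass
    class NodeActivity:
        apps_built_with_sdk: int
        app_usage_hours: float
        datasets_shared: int
        idle_hours: float

    def mining_reward(a: NodeActivity,
                      w_app=10.0, w_usage=0.5, w_data=5.0, w_idle=0.1) -> float:
        """Hypothetical weighted sum of the four mining factors listed above."""
        return (w_app * a.apps_built_with_sdk
                + w_usage * a.app_usage_hours
                + w_data * a.datasets_shared
                + w_idle * a.idle_hours)

    def tokens_burned(llm_tokens_used: int, accelerator_rate: float) -> float:
        """Hypothetical burn: spend scales with LLM tokens and accelerator type,
        and 5% of that spend is burned."""
        return llm_tokens_used * accelerator_rate * BURN_SHARE

    print(mining_reward(NodeActivity(1, 40.0, 3, 120.0)))
    print(tokens_burned(llm_tokens_used=100_000, accelerator_rate=0.0001))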

World's Best AI Performance Optimization Team

  • Experts in best-in-class, workload-agnostic AI acceleration engines
  • Leading the development of MNN.zone for edge AI computing
  • Experts in large-scale AI/rendering workload optimization
  • Leading experts in GPU/FPGA/NPU virtualization and sharing

Breaking AI Performance Records:

  • Stanford DAWNBench AI Benchmark World Record Breaker
  • TPCx-BB Benchmark World Record Breaker

Technical & business founders of APAC's largest public cloud GPU/FPGA/NPU service (A* Cloud)


Academic Contributions

  • Publications at top AI/accelerator conferences
  • GPU/FPGA/NPU patents worldwide

International teams located in the US, Singapore, and HK