# AI Blockchain Track PROPOSAL
- Authors: Balázs Toldi, Bertalan Péter
- Status: Draft
- Created: 2023-12-05
## Summary
AI and large language models (LLMs) are proliferating, and ever more advanced models appear with striking capabilities.
We would like everybody to be able to use these models, but not everybody has the hardware to run them.
Could we integrate AI with our existing blockchain ecosystems to open up AI to more users?

## Motivation
## Motivation
Some models take an immense amount of resources to train.
However, _running_ the models also costs significant computational resources.
For example, even on a beefier machine with 8 GB of VRAM, LLaMA and Stable Diffusion seem to run at roughly 1 token/s, even for the smallest model versions available.
What if we built an AIaaS (AI as a Service) system in a distributed, blockchain-based manner?
Users submit their prompts, along with the type of model they would like to run them on and some tokens as a reward; people with access to strong hardware can then execute the models for them to win the reward.
The problem, of course, is: how does one know that the results received from somebody else are indeed the results of running the model?
Also, when this communication takes place on the blockchain, both the prompt and the output will be public.

## Specification
## Specification
We propose using Zero-Knowledge Proofs to show that the output is indeed the result of running the model on the prompt and not some arbitrary output. This also lowers gas costs, since the computation is done off-chain. In the future, the same machinery could make the prompt and the output private.
1. Users submit requests to the network (e.g., a smart contract) specifying the prompt, the target model, and the reward.
2. Other users with strong hardware can fulfill these requests. Instead of sending transactions with just the output, they also attach a Zero-Knowledge Proof that the output is indeed the result of running the model on the prompt.
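
The request/fulfill flow above can be sketched as a minimal off-chain simulation. All names here are hypothetical, and `verify_proof` is a hash-commitment placeholder standing in for a real ZK verifier, purely for illustration:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Request:
    requester: str
    model: str          # e.g. "llama-7b" (hypothetical model identifier)
    prompt: str
    reward: int         # reward in the network's native token
    fulfilled: bool = False

def verify_proof(model: str, prompt: str, output: str, proof: bytes) -> bool:
    # Placeholder: a real system would run an on-chain ZK verifier here.
    # We fake it with a hash commitment purely for illustration.
    expected = hashlib.sha256(f"{model}|{prompt}|{output}".encode()).digest()
    return proof == expected

class InferenceMarket:
    """Toy, in-memory stand-in for the proposed smart contract."""

    def __init__(self) -> None:
        self.requests: list[Request] = []

    def submit(self, requester: str, model: str, prompt: str, reward: int) -> int:
        # Step 1: a user posts a request (prompt, model, reward).
        self.requests.append(Request(requester, model, prompt, reward))
        return len(self.requests) - 1  # request id

    def fulfill(self, req_id: int, output: str, proof: bytes) -> int:
        # Step 2: a prover answers with the output plus a proof.
        req = self.requests[req_id]
        if req.fulfilled:
            raise ValueError("request already fulfilled")
        if not verify_proof(req.model, req.prompt, output, proof):
            raise ValueError("invalid proof")
        req.fulfilled = True
        return req.reward  # paid out to the prover

# Usage: a requester posts a job, a prover answers it with a "proof".
market = InferenceMarket()
rid = market.submit("alice", "llama-7b", "What is a rollup?", reward=10)
out = "A rollup batches transactions off-chain..."
proof = hashlib.sha256(f"llama-7b|What is a rollup?|{out}".encode()).digest()
print(market.fulfill(rid, out, proof))  # prints the reward: 10
```

A real deployment would replace `verify_proof` with a succinct verifier (e.g. for a zkML proof system), so the contract only checks a constant-size proof rather than re-running the model.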
The network would use a native token to reward the users who fulfill requests, and the users who submit requests would pay in the same token. This way, the network can be self-sustaining.
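
One way to view this self-sustaining token loop is as an escrow: the requester's deposit is locked on submission and released to the prover once a valid proof is accepted. A minimal sketch, with all names hypothetical:

```python
class TokenLedger:
    """Toy balance sheet for the network's native token."""

    def __init__(self, balances: dict[str, int]) -> None:
        self.balances = dict(balances)
        self.escrow = 0  # tokens locked against open requests

    def lock(self, requester: str, amount: int) -> None:
        # The requester pays up front when submitting a request.
        if self.balances.get(requester, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[requester] -= amount
        self.escrow += amount

    def release(self, prover: str, amount: int) -> None:
        # The reward is released only after the proof verifies.
        if self.escrow < amount:
            raise ValueError("escrow underflow")
        self.escrow -= amount
        self.balances[prover] = self.balances.get(prover, 0) + amount

ledger = TokenLedger({"alice": 100})
ledger.lock("alice", 10)    # alice submits a request worth 10 tokens
ledger.release("bob", 10)   # bob's proof verified; he earns the reward
print(ledger.balances)      # {'alice': 90, 'bob': 10}
```

Because tokens only move from requesters to provers through escrow, the total supply is conserved and the network needs no external subsidy beyond the rewards users themselves post.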