OUR UPCOMING PROJECT
The Atom Project is our infrastructure framework. It converts existing AI models to the Atom format, trains completely new models without the extended waiting periods that training normally requires, and delivers the performance and ease needed to run models on retail hardware such as Nvidia RTX 30-, 40-, and 50-series GPUs, not just the 3090, 4090, and 5090.
Atom is still in development, but here are some of the benchmarks we have run so far.
Single Layer Multi-Token Inference:
Device Used: Nvidia RTX 4070 Super 12GB
Model: DeepSeek-R1:1.5B (Converted to Atom)
VRAM Usage: 674.89 MB Total
Cosine Variance Accuracy: 0.0000 (Exactly the same as the standard model)
Inference Time: 0.000489 seconds
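As a rough illustration of what a cosine-based equivalence check like the one reported above might look like (Atom's exact metric and API are not public, so the function and the sample vectors here are hypothetical), one can compare a converted model's output vector against the standard model's:

```python
import math

def cosine_variance(a, b):
    """Return 1 - cosine similarity between two output vectors.

    A result of 0.0 means the vectors point in exactly the same
    direction, i.e. the converted model's output matches the reference.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical output vectors from the standard and converted models:
reference = [0.12, -1.40, 3.30, 0.05]
converted = [0.12, -1.40, 3.30, 0.05]
print(f"{cosine_variance(reference, converted):.4f}")
```

A value of 0.0000, as in the benchmark above, indicates the converted model's output direction is indistinguishable from the original's.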
When testing the standard model, we got these results:
Device Used: Nvidia RTX 4070 Super 12GB
Model: DeepSeek-R1:1.5B
VRAM Usage: 3.89 GB Total
Inference Time: 0.003128 seconds
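Inference times like those quoted above are typically gathered with a simple wall-clock harness. The sketch below uses a stand-in workload (Atom's API is not yet public), but shows the general pattern of warming up, timing repeated runs, and reporting the fastest one:

```python
import time

def time_inference(run_once, warmup=3, repeats=20):
    """Time a zero-argument workload and return the fastest run in seconds."""
    # Warm up so one-time costs (allocation, cache fill) are excluded.
    for _ in range(warmup):
        run_once()
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_once()
        timings.append(time.perf_counter() - start)
    # The minimum of many runs is the least noise-contaminated estimate.
    return min(timings)

# Stand-in workload; a real benchmark would call the model's forward pass.
def fake_forward():
    sum(i * i for i in range(10_000))

print(f"Inference time: {time_inference(fake_forward):.6f} seconds")
```

Note that GPU work is asynchronous on most frameworks, so a real harness would also synchronize the device before reading the clock.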
And this is just the beginning!
In the near future we will begin rolling out larger-parameter models, run benchmarks on them, and publish all of the details. Keep an eye on our socials, because there is definitely more to Atom than what is currently shown.
Our goal with Atom is to be a first step toward AGI (Artificial General Intelligence), and we believe that, given enough time, it will directly accelerate the race to AGI.