Come join Gary and Scott as they show you the easiest way in the world to run an AI Large Language Model on your home machine, even if your machine doesn't have a lot of power!
Come find out how easy it is to run an LLM at home, without it connecting to the internet while it's running. Not only will these models not share what you're doing with the rest of the internet, they can't. So your private info and what you do with it remain private. The most laborious part of running these models is sitting in your chair, drinking coffee while you wait for the model to finish downloading. Yes, it's really that easy.
NOTE: Due to the current surge of Covid-19 we will be conducting meetings remotely until things settle back down. According to one analysis of the numbers, current levels of virus in the wastewater treatment plants are worse than they were during Omicron. By any interpretation of the data, it's really bad. We are tracking the situation at https://covid.cdc.gov/covid-data-tracker/#wastewater-surveillance and, fortunately, Ohio is reporting from a far larger percentage of our treatment plants than most other states. This allows us to see how bad it is. If you look at the red dots map, don't be fooled into thinking it's bad in only a few places. The truth is that most places aren't bothering to test, and are unaware of how good, or bad, things are in their areas.
Back to the fun, there are a few things to know about running LLMs at home:
- Almost (almost) any machine can run an LLM of some size.
- How powerful a model you can run is completely dependent on your video memory. Your CPU and RAM are almost (almost) irrelevant.
- The smallest amount of VRAM we'd recommend trying to use is 3 GB. The results won't be great at that size, but it will let you run a 7-billion-parameter model that's been quantized to run on smaller machines. If you don't have at least 3 GB of VRAM, this might be an exercise in frustration.
- We are unable to help with Macs that use shared memory for video. This approach is designed to work with dedicated video RAM on a video card, preferably GDDR6, and the cards that work best (at the moment) are NVIDIA.
- If you don't have much VRAM, it's possible to offload some of the work to system RAM, but RAM is much, much slower. Keep that in mind if you want to run bigger models; there's a short example of this right after the list.
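To give you a taste of what we'll be doing, here's a rough sketch using the llama-cpp-python library with a quantized GGUF model. This is just one of several ways to run a model locally, and the model path and settings below are placeholders, not specific recommendations.

```python
# Minimal sketch: run a quantized 7B model locally with llama-cpp-python.
# Assumes the library is installed with GPU support and you've already
# downloaded a GGUF model file (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-7b-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 = put every layer in VRAM; lower this to offload
                      # some layers to (much slower) system RAM
    n_ctx=2048,       # context window size
)

result = llm(
    "In one sentence, why does running an LLM locally keep my data private?",
    max_tokens=128,
)
print(result["choices"][0]["text"])
```

Everything here runs on your own machine; nothing is sent anywhere, which is the whole point.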