How much do you actually need to run models like DeepSeek (671B) or GPT-4 locally, i.e. without an internet connection? Almost the entire cost is hardware (91% 𝐨𝐟 𝐜𝐨𝐬𝐭 𝐛𝐞𝐢𝐧𝐠 𝐣𝐮𝐬𝐭 𝐭𝐡𝐞 𝐆𝐏𝐔 𝐜𝐨𝐬𝐭).

For an individual developer, running [#DeepSeek](https://www.linkedin.com/search/results/all/?keywords=%23deepseek&origin=HASH_TAG_FROM_FEED) locally to modify weights and tweak the model is beyond reach as of now. The question is: when will we be able to run models like these locally, with no subscription fee to pay?

I did some calculations (rough sketches at the end of this post), and it turns out:

- GPT-4 is rumored to have ~2 trillion parameters
- Just holding that in memory takes 1 TB+ of VRAM, roughly Rs 1 crore ($120,000) worth of GPUs
- The best consumer GPU you can buy for your PC, the RTX 4090, has 24 GB of VRAM. Nowhere close.

Consumer GPU VRAM doubles roughly every 3 years. By 2032, consumer GPUs could hit 192+ GB of VRAM.

Quantization, distillation, and pruning are cutting model size by ~10x without significant performance loss. DeepSeek used these techniques, and its distilled models deliver similar performance at a fraction of the size. If this trend continues, by 2026-27 optimized GPT-4/DeepSeek-class models (100-200B parameters) could run on high-end PCs with 48-96 GB of VRAM, or on Apple M-series chips.

Cost? The Rs 1 crore setup of today could drop to ~Rs 10 lakh by 2030.

The future is bright. Think about what you could do with that kind of AI power sitting right inside your PC.
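
For the curious, here is the back-of-envelope math behind the VRAM numbers. A minimal sketch in Python; the parameter counts are public or rumored estimates, not official figures:

```python
# Back-of-envelope: VRAM needed just to hold the model weights.
# Parameter counts are public (DeepSeek) or rumored (GPT-4) estimates.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

MODELS = {
    "DeepSeek-R1 671B": 671e9,
    "GPT-4 (rumored ~2T)": 2e12,
}

for name, params in MODELS.items():
    for dtype, nbytes in BYTES_PER_PARAM.items():
        gb = params * nbytes / 1e9  # bytes -> GB
        print(f"{name} @ {dtype}: ~{gb:,.0f} GB of VRAM")
```

At fp16, DeepSeek's 671B parameters alone need ~1,342 GB and a 2T-parameter model ~4,000 GB, which is where the "1 TB+ VRAM" figure comes from.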
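
The VRAM trend line, assuming the doubling-every-3-years cadence from the post holds (this is extrapolation, not a roadmap):

```python
# Naive projection: top consumer-GPU VRAM doubling every ~3 years.
# Start point (RTX 4090: 24 GB, launched 2022) is fact; the doubling
# cadence is an assumption from the post, not a guarantee.
year, vram_gb = 2022, 24
while year <= 2034:
    print(f"{year}: ~{vram_gb} GB top consumer GPU")
    year, vram_gb = year + 3, vram_gb * 2
```

That puts consumer cards at ~192 GB around 2031-32, matching the projection above.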
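
And what a compressed model would actually need: a sketch assuming distillation gets us to 100-200B parameters and quantization cuts weights to 8 or 4 bits:

```python
# Weight memory for a distilled 100-200B model at reduced precision.
# The 100-200B sizes and the ~10x compression are the post's assumptions.
for params_b in (100, 200):        # distilled model size, in billions
    for bits in (8, 4):            # quantized weight width
        gb = params_b * bits / 8   # billions of params * bytes/param = GB
        print(f"{params_b}B params @ int{bits}: ~{gb:.0f} GB of weights")
```

A 100B model at 4-bit is ~50 GB and a 200B model ~100 GB, which is roughly the 48-96 GB window a 2026-27 high-end PC could plausibly offer.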