Deploying A.I. Locally on Your Own Notebook --- Just as Powerful and Easy. An experiment in using A.I. locally to analyze your spreadsheets
- Jack Lau
- 2 days ago
- 4-minute read
For so many years, we used to dream of having a supercomputer on a single desk. Today, that dream isn’t futuristic — it’s our working reality. We are living through what I call the Desktop Renaissance, where the most advanced Artificial Intelligence doesn’t live in some distant cloud — it lives in your pocket and on your laptop.
Meet the New Titans: Qwen 3.5, DeepSeek-V3, and Llama 4
The pace of change in AI is staggering. If you haven’t explored the Ollama library lately, you’re missing the equivalent of a global intelligence arms race — all download-ready, all local, and free.
This isn’t chatbot evolution; this is reasoning machines arriving on our personal hardware.
Qwen 3.5 — The All-Rounder. Alibaba’s flagship bilingual model. In its 35B configuration, it strikes the current “sweet spot” for notebooks: a large context window, integrated vision, and the ability to understand charts and spreadsheets as part of a single workflow.
DeepSeek-V3 & R1 — The Thinker. At an eye-watering 671B parameters, with smaller distilled variants suited to local deployment, DeepSeek’s “Thinking Mode” lets the model pause and reason before responding. It’s not just chat — it’s cognitive sequencing.
Llama 4 — The Western Anchor. Meta’s latest family — from Scout (109B) to Maverick (400B) — pushes multimodal capabilities. Seamless transitions between text, image, and code analysis make it a versatile partner in research and engineering.
Public Perception Through Financial Returns in the Equity Market
Some of the companies that released these models have since gone public. We are interested in how the market perceives them from an equity point of view.
| Date (YYYY‑MM‑DD) | Model & company | Share price change from that date to now |
| --- | --- | --- |
| 2025‑01‑23 | DeepSeek‑R1 – DeepSeek (private) | still private |
| 2025‑04‑28 | Qwen3 – Alibaba Group (BABA, NYSE) | ≈ +20–25% (from around the low 120s in late Apr 2025 to the mid 140s now) |
| 2025‑05‑28 | DeepSeek‑R1‑0528 – DeepSeek (private) | still private |
| 2025‑06‑15 | GLM‑4.6 – Zhipu AI (HKEX: 2513) | IPO 2026‑01‑08; ≈ +400–500% since IPO |
| 2025‑10‑01 | Qwen3‑VL – Alibaba Group (BABA, NYSE) | ≈ −15–20% (from about 180–183 on 2025‑10‑01 to the mid 140s now) |
| 2025‑11‑01 | GLM‑4.7‑Flash – Zhipu AI (HKEX: 2513) | IPO 2026‑01‑08; ≈ +400–500% since IPO |
| 2026‑01‑08 | Zhipu AI IPO – Zhipu AI (HKEX: 2513) | ≈ +400–500% since IPO |
| 2026‑01‑09 | MiniMax core model – MiniMax (Hong Kong IPO) | IPO 2026‑01‑09; ≈ +100% (roughly doubled) since IPO |
| 2026‑02‑02 | Qwen3‑Coder‑Next – Alibaba Group (BABA, NYSE) | ≈ −10–15% (from about 168 on 2026‑02‑02 to the mid 140s now) |
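For anyone who wants to sanity-check the rough percentages above, the arithmetic is just a start-to-end price ratio. The helper below is my own illustration, reusing the approximate BABA prices quoted in the table:

```python
def pct_change(start: float, end: float) -> float:
    """Percentage change from a starting price to an ending price."""
    return (end - start) / start * 100.0

# BABA from roughly the low 120s (late Apr 2025) to the mid 140s now
print(round(pct_change(122.0, 145.0), 1))  # prints 18.9, i.e. about +19%
```

The table's ranges are looser than this because "low 120s" and "mid 140s" are themselves approximations.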
AI That Builds AI: The End of the “Coder” Barrier
The most profound shift isn’t just what these models can do — it’s how they let us build.
Recently I built an Excel AI Analyzer that talks to your private data locally. But here’s the twist: I didn’t write most of the code myself. I used Qwen 3-Coder and Claude Code as my lead developers.
We’re at a point where anyone who can describe logic clearly can build state-of-the-art software. You no longer need a PhD in Computer Science; you need curiosity and clarity of thought.

Why Local Matters: Ownership and Privacy
Why run these models locally through Ollama instead of a cloud service?
Zero Cost: No subscription — just the electricity to run your machine.
Total Privacy: Your spreadsheets, emails, and business secrets stay on your hardware.
No Latency: Works offline, on a plane, or anywhere your notebook can run.
Technology is at its best when it becomes a “bicycle for the mind”: empowering you to go further, faster, under your own power.
Deep Dive: How the Excel AI Analyzer Works
Let’s look under the hood of the Excel AI Analyzer — a tool that turns static spreadsheet files into conversational insights.
1. The Three-Tier Architecture
I built this around a “Golden Team” of components:
Frontend (The Waiter): A clean HTML interface that handles file upload and streams responses word-by-word — just like the big cloud services but local.
Backend (The Coordinator): A Flask Python server that securely holds your data in RAM and orchestrates AI calls.
AI Engine (The Chef): Ollama hosts models like Qwen 2.5 and DeepSeek. The backend calls Ollama to interpret and reason on your questions.
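As a rough sketch of the Coordinator-to-Chef hop, here is how a backend might call Ollama’s local `/api/chat` endpoint using only Python’s standard library. The helper names and prompt wording are my own illustration, not the project’s actual code — only Flask, Ollama, and the model names come from the post:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_request(model: str, question: str, data_summary: str) -> dict:
    """Assemble the JSON payload sent to Ollama; the Pandas-generated
    summary of the spreadsheet is injected as system context."""
    return {
        "model": model,
        "stream": True,  # stream tokens so the frontend can render word-by-word
        "messages": [
            {"role": "system",
             "content": f"You are a spreadsheet analyst. Data:\n{data_summary}"},
            {"role": "user", "content": question},
        ],
    }

def stream_answer(model: str, question: str, data_summary: str):
    """Yield response fragments from the local Ollama server (requires
    a running `ollama serve`). Ollama streams one JSON object per line."""
    payload = json.dumps(build_request(model, question, data_summary)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            chunk = json.loads(line)
            yield chunk.get("message", {}).get("content", "")
```

In the real tool the Flask route would simply iterate over such a generator and forward each fragment to the browser.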
2. Solving the “Binary” Problem with Pandas
Most AI tools can’t eat binaries like Excel files. They see a wall of gibberish. So we translate.
I use Pandas as the translator:
Read the binary .xlsx into a structured table (DataFrame).
Analyze types (dates, currencies, text, etc.).
Summarize into a text “map” the AI can understand.
Suddenly, AI isn’t guessing at your data — it’s interpreting it.
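The three translation steps above can be sketched in a few lines of Pandas. The `summarize` helper and the sample columns are illustrative assumptions on my part; the real tool would start from `pd.read_excel("your_file.xlsx")`:

```python
import pandas as pd

def summarize(df: pd.DataFrame) -> str:
    """Turn a DataFrame into a plain-text 'map' an LLM can read:
    shape first, then each column's name, dtype, and sample values."""
    lines = [f"{len(df)} rows x {len(df.columns)} columns"]
    for col in df.columns:
        sample = ", ".join(map(str, df[col].dropna().head(3)))
        lines.append(f"- {col} ({df[col].dtype}): e.g. {sample}")
    return "\n".join(lines)

# A small inline frame so the sketch is self-contained;
# in practice this would come from pd.read_excel(...).
df = pd.DataFrame({
    "Date": pd.to_datetime(["2025-01-01", "2025-02-01"]),
    "Revenue": [1200.50, 1875.00],
    "Region": ["APAC", "EMEA"],
})
print(summarize(df))
```

The resulting text map is what gets pasted into the model’s context, so the AI reasons about real column names and types instead of raw bytes.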
3. Privacy by Design
Because everything runs in RAM and talks only to Ollama on your host:
Nothing goes to the cloud.
Data is wiped the moment you close the terminal.
No accidental uploads or lingering footprints.
Most “AI for Excel” tools require uploading sensitive sheets to remote servers. This one doesn’t.
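A minimal sketch of that in-RAM lifecycle, with hypothetical helper names (the actual project may structure this differently):

```python
import uuid

# Uploaded sheets live only in this process-level dict: nothing is
# written to disk, so stopping the server discards everything.
_DATASETS: dict[str, object] = {}

def store(df) -> str:
    """Keep a parsed DataFrame in RAM and hand back a session token."""
    token = uuid.uuid4().hex
    _DATASETS[token] = df
    return token

def fetch(token: str):
    """Look up a dataset by its token; gone once the process restarts."""
    return _DATASETS.get(token)

def wipe(token: str) -> None:
    """Explicitly forget a dataset, e.g. when the user clears a session."""
    _DATASETS.pop(token, None)
```

Because the dict is ordinary process memory, “data is wiped when you close the terminal” falls out of the design for free.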
Get Started
Curious to try this yourself?
📺 Watch the demo: Local AI to Analyze Spreadsheets
📄 Explore the code: https://github.com/swanlandjack/TestOllamaSpreadSheetRead
The tools are free. The privacy is total. The potential is limited only by your imagination.


