AI is supposed to make things easier, right? That’s the pitch. Smarter automation, faster problem-solving, less time wasted digging through docs. But as this experience proved, bad AI troubleshooting can be just as frustrating as bad human support—if not worse.
This is a case study in how AI troubleshooting can feel like an intricate puzzle, one that demands patience, adaptability, and knowing how to ask the right questions. It is also the story of how that puzzle ate far too much of my time, and the hard lessons that will keep me from making these mistakes again.
Let’s break it down.
🛑 First Wrong Turn: Assuming Instead of Verifying
I was setting up Continue.dev in VS Code with a locally hosted QwQ-32B model running in LM Studio. The goal was clear: leverage LM Studio’s OpenAI-compatible API to power Continue.dev without relying on cloud-based models. I wanted full control over my AI workflow—no API limits, no hidden costs, just pure local processing.
To make this work, Continue.dev needed a properly structured config.json file to recognize and connect to LM Studio’s API. The logic was straightforward: define the API base, specify the model name, and it should all just work.
But that’s not how it played out.
Instead of a quick and easy setup, I found myself in a multi-hour back-and-forth with an AI assistant that served up misleading advice, incomplete configurations, and unhelpful troubleshooting steps. But I knew there had to be a way forward; I just needed to figure out how to guide the AI toward better answers. At one point I was completely stuck, going in circles on suggestions that made no sense. That’s when I turned to a more manual approach: pasting screenshots of error messages and configuration files directly into the AI chat. Surprisingly, this accelerated the debugging process, because it forced the assistant to engage with the actual problem rather than making generic assumptions. Even so, I had to untangle bad recommendations and incorrect settings one by one before finally getting it to work.
Each misstep was another clue in the debugging puzzle—conflicting advice, incorrect settings, and repeated tests that should have worked but didn’t. It became clear that I had to adjust my approach and find a more effective way to interact with the AI. Here’s every frustrating wrong turn, and what I should have done differently from the start.
Instead of taking a structured approach, the AI made its first critical mistake: it assumed Continue.dev would automatically detect the model.
❌ Wrong assumption: “Continue.dev should just find the model if LM Studio is running.”
- Reality: It doesn’t. Continue.dev requires an explicit model name from LM Studio’s API.
❌ Wrong assumption: “The default API base should work.”
- Reality: LM Studio runs on http://localhost:1234/v1, and this API base must be configured manually.
✅ What should have happened instead:
- Use screenshots and direct logs. Pasting actual error messages into ChatGPT provided better debugging help than relying on verbal descriptions alone.
- Immediately check http://localhost:1234/v1/models to retrieve the exact model name.
- Confirm the correct API base before assuming defaults.
🔄 Lesson learned: AI is a powerful tool, but it works best when you provide precise, verifiable data. Never assume auto-detection—always verify API behavior first.
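For concreteness, this is roughly the shape of the response LM Studio’s OpenAI-compatible server returns from http://localhost:1234/v1/models while a model is loaded. The "id" value below is illustrative; whatever id your server actually reports is the string that belongs in the config:

```json
{
  "object": "list",
  "data": [
    {
      "id": "qwq-32b",
      "object": "model",
      "owned_by": "organization_owner"
    }
  ]
}
```

That "id" field is the exact model name Continue.dev needs; no guessing, no auto-detection.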
🚩 Second Wrong Turn: Getting the provider Setting Wrong
Once I provided the LM Studio API model list, the AI made its next mistake.
❌ It said: “Use "provider": "ollama".”
- Why? Because LM Studio runs local AI models, and it incorrectly assumed Continue.dev handled all local models via the "ollama" provider.
- Reality: LM Studio mimics OpenAI’s API format, so Continue.dev expects "provider": "openai" instead.
This caused more wasted time because:
- I tested the wrong setting.
- The AI backtracked and switched answers.
✅ What should have happened instead:
- Check the documentation first: LM Studio’s docs confirm it exposes an OpenAI-compatible API, and Continue.dev’s docs show which provider value to use for OpenAI-compatible endpoints.
- Never switch answers without verification.
🔄 Lesson learned: Triple-check API compatibility before changing settings.
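For reference, the part of config.json this wrong turn was about looks roughly like this. It’s a sketch, not a complete entry (the fields the next two sections cover are still missing), and the model value is a placeholder for whatever /v1/models reported:

```json
{
  "models": [
    {
      "provider": "openai",
      "model": "qwq-32b",
      "apiBase": "http://localhost:1234/v1"
    }
  ]
}
```

"provider": "openai" tells Continue.dev to speak OpenAI’s API format, and "apiBase" points it at the local LM Studio server instead of OpenAI’s hosted endpoint.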

⚠️ Third Wrong Turn: Ignoring the Error Messages Too Long
Once the "provider"
issue was corrected, Continue.dev still wasn’t loading the config file.
At this point, VS Code was giving clear error messages, but instead of immediately focusing on them, I wasted more time iterating through partial fixes.
The VS Code error logs said:
Missing property 'apiKey'
Missing property 'title'
But instead of fixing both problems at once, the AI only corrected "apiKey", then had to circle back to fix "title" separately.
❌ It should have immediately analyzed the logs in full instead of attempting piecemeal fixes.
✅ What should have happened instead:
- Read the full error log the moment the config didn’t load.
- Fix all missing properties at once, not one by one.
🔄 Lesson learned: Read error logs fully before troubleshooting. Fix everything at once.
⏳ Final Wrong Turn: Putting "title"
in the Wrong Spot
By now, most of the config.json file was correct, but Continue.dev still refused to load it.
❌ The AI placed "title"
at the root level of the JSON file.
- Reality: Continue.dev expects
"title"
to be inside the"models"
array, not at the root.
After correcting this, everything finally worked.
✅ What should have happened instead:
- Check Continue.dev’s expected JSON schema before assuming where "title" belongs.
🔄 Lesson learned: Always reference the tool’s expected schema before placing JSON fields.
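Putting all four fixes together, the working file ended up shaped roughly like this. Treat it as a sketch rather than my exact config: the title and model values are illustrative, and the apiKey value is just a placeholder string, since Continue.dev’s schema requires the property even though the local server has no real key:

```json
{
  "models": [
    {
      "title": "QwQ-32B (LM Studio)",
      "provider": "openai",
      "model": "qwq-32b",
      "apiBase": "http://localhost:1234/v1",
      "apiKey": "lm-studio"
    }
  ]
}
```

Note where "title" lives: inside the entry in the "models" array, not at the root of the file. That one placement detail was the difference between a config that loaded and one that refused to.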
🚀 Takeaways: How To Debug Like a Pro (Save This for Next Time!)
Quick-Action Checklist
✔ Check the API manually first. Hit /v1/models before configuring anything.
✔ Verify the provider setting. Does it follow OpenAI’s API ("openai") or a custom setup?
✔ Read the full error log. Fix all issues at once, not one at a time.
✔ Double-check JSON structure. Don’t assume fields are placed correctly—validate first.
By following these takeaways, I’ll avoid the same mistakes in the future—and any developer facing similar issues can do the same:
1️⃣ Always Validate API Behavior First
- Never assume auto-detection. Go to the API endpoint (/v1/models) and get exact model names before configuring anything.
2️⃣ Check Compatibility Before Changing Settings
- Verify API expectations. "provider": "openai" was correct because LM Studio mimics OpenAI’s API. This should have been confirmed before suggesting "ollama".
3️⃣ Read the Full Error Logs Immediately
- Instead of troubleshooting one issue at a time, read all errors first and correct everything at once.
4️⃣ Cross-Check JSON Schema Before Placing Fields
"title"
was in the wrong place. This would have been avoided by checking Continue.dev’s schema earlier.
🌟 The Silver Linings: Lessons and Wins
As frustrating as this experience was, there were some unexpected positives. When AI-generated suggestions kept leading me in circles, manually pasting screenshots and full error logs into the chat actually helped accelerate debugging. Once I provided the AI with raw data instead of just describing the problem, it started giving more relevant answers.
This experience also reinforced a critical skill: knowing when to stop trusting AI recommendations blindly and switch to manual troubleshooting techniques.
🛠 What’s Next?
Now that Continue.dev is working with LM Studio and QwQ-32B, I’m moving on to fine-tuning and optimizations. If you’re setting this up yourself, don’t make the same mistakes I did—validate everything before you start changing things.
If you’ve had a similar troubleshooting nightmare, drop a comment or share your experience. What’s the worst AI-generated advice you’ve ever followed? 🚀🔥