I built this portfolio almost entirely with Claude Code. My biggest concern wasn't the design or the features — it was security. So I ran a full audit. Here's the unfiltered result.
When I decided to build my portfolio, there was something that worried me more than the design or the feature set: security.
I'm not a professional developer — I have solid technical fundamentals, but this project was different. I built it almost entirely using AI, specifically with Claude Code.
And from day one, I had the same nagging doubt:
"What if the AI introduces a bad security practice without me noticing?"
I was diligent about asking for proper input validation, environment variable handling, and data protection throughout the build. But when you're not a security specialist, there's always a gap between "I followed best practices" and "I actually know this is safe."
So I did the logical thing: I requested a full security audit of the project and the GitHub repository while it was live in production.
Good news: nothing dangerous was found. The repository follows solid security practices across every area reviewed.
| Area | Status |
|---|---|
| Hardcoded secrets (API keys, tokens) | None found |
| .env files with real credentials in repo | Not present |
| .gitignore covers sensitive files | Correctly configured |
| API routes (/api/subscribe, /api/contact) | Input validation correct |
| Dependency vulnerabilities | npm audit → 0 found |
| XSS / SQL injection | Not found |
| Dangerous patterns (eval, exec) | Not found |
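To illustrate the kind of input validation the audit looked for on the API routes, here is a minimal sketch of how a subscribe payload might be checked before anything touches the database. The field names (`email`, `name`) and the function itself are assumptions for illustration, not the actual project code.

```typescript
// Hypothetical validation for a /api/subscribe payload.
// The schema (email, optional name) is an assumption, not the real one.

interface SubscribePayload {
  email: string;
  name?: string;
}

// Basic shape and format checks before the data leaves the request handler.
function validateSubscribe(body: unknown): SubscribePayload | null {
  if (typeof body !== "object" || body === null) return null;
  const { email, name } = body as Record<string, unknown>;

  // Reject missing or non-string emails, and obviously malformed ones.
  if (typeof email !== "string") return null;
  const trimmed = email.trim();
  if (trimmed.length === 0 || trimmed.length > 254) return null;
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(trimmed)) return null;

  // Optional name: must be a short string if present at all.
  if (name !== undefined && (typeof name !== "string" || name.length > 100)) {
    return null;
  }

  return { email: trimmed, name: typeof name === "string" ? name : undefined };
}
```

Returning `null` for anything unexpected keeps the handler simple: reject early, then let the rest of the route work with a known-good shape.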
Beyond "no issues found", the audit highlighted several practices worth calling out explicitly:
- The .env file is never committed to Git — correctly excluded via .gitignore
- Secrets are read from process.env, not hardcoded anywhere in the source

Bottom line: the repository and the deployment are not at risk from this project. The security practices in place are correct.
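The pattern behind the "no hardcoded secrets" finding is simple: every credential is read from process.env at runtime, and the app fails fast if one is missing. A minimal sketch, where the variable name is a hypothetical example, not the project's actual configuration:

```typescript
// Read a required secret from the environment instead of hardcoding it.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    // Fail fast at startup rather than at first use deep in a request.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Hypothetical usage; SUPABASE_SERVICE_KEY is an assumed variable name:
// const supabaseKey = requireEnv("SUPABASE_SERVICE_KEY");
```

Failing loudly at startup means a misconfigured deployment is caught immediately instead of surfacing as a confusing runtime error later.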
The audit also flagged three hardening suggestions — none of them critical today, but good practice to layer in over time:
- Add abuse protection (e.g., rate limiting) to /api/subscribe and /api/contact to prevent automated spam or abuse of the form endpoints.
- Run npm audit periodically to catch new dependency vulnerabilities as they're disclosed — not just at build time.

The real takeaway from this exercise isn't technical — it's about mindset:
Building with AI doesn't mean neglecting security. But it does mean one thing: AI accelerates development. Responsibility stays with the human.
This process showed me that even without being a professional developer, you can ship solid, secure projects — if you rely on good tooling, apply best practices consistently, and verify with independent audits.
Looking back, three things separated "probably fine" from "demonstrably safe":
1. Being explicit in every prompt. I was specific about security requirements from the first session: environment variables for secrets, input validation on every endpoint, no hardcoded credentials. The AI delivered because I was clear about what I needed.
2. Reviewing what was generated. You don't need to be an expert, but you do need to understand what the code is doing at a high level. That human review layer is what turns AI output into production-ready code.
3. Auditing independently. The external validation closed the loop. Don't assume it's correct — verify it. That's what gave me actual confidence rather than a vague feeling of safety.
Security isn't a final state. It's a continuous process of review, auditing, and incremental improvement.
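One way to keep that process running is to schedule the dependency audit in CI rather than running it only at build time. Here is a hypothetical GitHub Actions workflow sketch; the workflow name, schedule, and severity threshold are all assumptions, not the project's actual setup:

```yaml
# Hypothetical scheduled audit workflow, not the project's real CI config.
name: scheduled-audit
on:
  schedule:
    - cron: "0 6 * * 1"   # every Monday at 06:00 UTC (assumed cadence)
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      # Fails the job when advisories at or above the threshold are found.
      - run: npm audit --audit-level=high
```

A failing scheduled run turns "a new vulnerability was disclosed" into a notification instead of something you discover at the next deploy.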
If you're thinking about building projects with AI but worried about the security implications, here's my honest take:
The stack I used — Next.js + Supabase + Vercel + Claude Code — has solid defaults if configured correctly. The key is not outsourcing that configuration entirely to the AI without reviewing it.
Guillermo García
Digital Analytics Engineer · TNK