Security · AI · Claude Code · 6 min read

Security Audit of My AI-Built Portfolio.
Zero Critical Findings. Here's What I Learned.

I built this portfolio almost entirely with Claude Code. My biggest concern wasn't the design or the features — it was security. So I ran a full audit. Here's the unfiltered result.

security-audit — tnk-portfolio
0 critical vulnerabilities · 0 exposed secrets · 3 optional improvements

The concern that wouldn't leave me alone

When I decided to build my portfolio, there was something that worried me more than the design or the feature set: security.

I'm not a professional developer — I have solid technical fundamentals, but this project was different. I built it almost entirely using AI, specifically with Claude Code.

And from day one, I had the same nagging doubt:

"What if the AI introduces a bad security practice without me noticing?"

I was diligent about asking for proper input validation, environment variable handling, and data protection throughout the build. But when you're not a security specialist, there's always a gap between "I followed best practices" and "I actually know this is safe."

So I did the logical thing: I requested a full security audit of the project and the GitHub repository while it was live in production.


Audit results

Good news: nothing dangerous was found. The repository follows solid security practices across every area reviewed.

Area | Status
Hardcoded secrets (API keys, tokens) | None found
.env files with real credentials in repo | Not present
.gitignore covers sensitive files | Correctly configured
API routes (/api/subscribe, /api/contact) | Input validation correct
Dependency vulnerabilities | npm audit → 0 found
XSS / SQL injection | Not found
Dangerous patterns (eval, exec) | Not found
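To make "input validation correct" concrete for an endpoint like /api/subscribe, here's a minimal TypeScript sketch (the function name and payload shape are illustrative assumptions, not the project's actual code):

```typescript
// Hypothetical validation helper for a /api/subscribe payload.
// The field name `email` is an assumption, not the project's real schema.
type SubscribePayload = { email: string };

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export function validateSubscribe(body: unknown): SubscribePayload | null {
  // Reject anything that isn't a plain object with a string `email`.
  if (typeof body !== "object" || body === null) return null;
  const email = (body as Record<string, unknown>).email;
  if (typeof email !== "string") return null;

  // Reject empty, oversized, or malformed addresses.
  const trimmed = email.trim();
  if (trimmed.length === 0 || trimmed.length > 254) return null;
  if (!EMAIL_RE.test(trimmed)) return null;

  return { email: trimmed };
}
```

The important property isn't the exact regex — it's that the handler never trusts `req.body` as typed data and rejects anything that fails the checks before touching the database.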

What held up particularly well

Beyond "no issues found", the audit explicitly credited the practices behind those results: secrets kept out of the repository and loaded from environment variables, consistent input validation on the API routes, and a clean dependency tree.

Bottom line: the repository and the deployment are not at risk from this project. The security practices in place are correct.

Optional improvements (non-urgent)

The audit also flagged three hardening suggestions — none of them critical today, but good practice to layer in over time:

1. Rate limiting on /api/subscribe and /api/contact to prevent automated spam or abuse of the form endpoints.
2. CSRF tokens on forms for additional protection against cross-site request forgery attacks.
3. Running npm audit periodically to catch new dependency vulnerabilities as they're disclosed, not just at build time.
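The rate-limiting suggestion doesn't require extra infrastructure to get started. A minimal sliding-window, in-memory sketch (names and limits are illustrative; a real serverless deployment would back this with a shared store like Redis, since instances don't share memory):

```typescript
// Hypothetical in-memory rate limiter: at most `limit` requests per
// `windowMs` per key (e.g. client IP). Illustrative only — serverless
// instances don't share memory, so production would use a shared store.
const hits = new Map<string, number[]>();

export function rateLimit(key: string, limit = 5, windowMs = 60_000): boolean {
  const now = Date.now();
  // Keep only the timestamps that still fall inside the window.
  const recent = (hits.get(key) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    hits.set(key, recent);
    return false; // over the limit — reject this request
  }
  recent.push(now);
  hits.set(key, recent);
  return true; // allowed
}
```

An API route would call `rateLimit(ip)` first and return a 429 response when it comes back false.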

The most important conclusion

The real takeaway from this exercise isn't technical — it's about mindset:

Building with AI doesn't mean neglecting security. But it does mean one thing: AI accelerates development. Responsibility stays with the human.

This process showed me that even without being a professional developer, you can ship solid, secure projects — if you rely on good tooling, apply best practices consistently, and verify with independent audits.

What actually made the difference

Looking back, three things separated "probably fine" from "demonstrably safe":

1. Being explicit in every prompt. I was specific about security requirements from the first session: environment variables for secrets, input validation on every endpoint, no hardcoded credentials. The AI delivered because I was clear about what I needed.

2. Reviewing what was generated. You don't need to be an expert, but you do need to understand what the code is doing at a high level. That human review layer is what turns AI output into production-ready code.

3. Auditing independently. The external validation closed the loop. Don't assume it's correct — verify it. That's what gave me actual confidence rather than a vague feeling of safety.

Security isn't a final state. It's a continuous process of review, auditing, and incremental improvement.

If you're also building with AI

If you're thinking about building projects with AI but worried about the security implications, here's my honest take:

The stack I used — Next.js + Supabase + Vercel + Claude Code — has solid defaults if configured correctly. The key is not outsourcing that configuration entirely to the AI without reviewing it.
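"Configured correctly" largely comes down to secrets living in environment variables and the code failing loudly when one is missing. A minimal sketch (the variable names follow common Next.js/Supabase conventions but are assumptions about this project, not its actual code):

```typescript
// Hypothetical server-side config loader. Failing fast on a missing
// variable beats silently falling back to an empty string.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Illustrative usage (server-side only — a service key must never
// reach client-side code):
// const supabaseUrl = requireEnv("NEXT_PUBLIC_SUPABASE_URL");
// const serviceKey  = requireEnv("SUPABASE_SERVICE_ROLE_KEY");
```

A loud failure at startup is also what keeps a hardcoded fallback from ever sneaking into the repository.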

Guillermo García
Digital Analytics Engineer · TNK