I built an AI Code Sandbox

Boilerplate for Claude & ChatGPT web code editor using Next.js and Vercel

Nick Confrey
4 min readJul 15, 2024
Try the final result at https://miniapps.getseam.xyz/

Creating software online is too hard. In the early days of the internet it was simple enough to change HTML to customize pages, but now most online platforms are locked walled gardens. The promise of AI is that it will allow anyone to create software, even if they don’t know how to code. I wanted to experiment with a web editor that would allow anyone to create an app for themselves.

bringing back customizing the internet again

For this project, I created an online code editor that allows a user to describe the app they’d like to build, and then the AI builds it for them, alongside a live reloading code editor and preview. There were several stages of the project:

  1. Build the webapp at host it at miniapps.getseam.xyz
  2. Call the AI endpoint — for this article, I’m using Anthropic’s Claude 3.5
  3. Use the generated code to populate a builder and code preview.

Let’s get into it!

Next.js Skeleton App

I got started by cloning the Vercel Next boilerplate and deploying it. This gets a basic webapp up and running with a localhost and a deployment URL, ready for more complicated stuff to come.

Backend AI API Calls

In order to use Claude 3.5 Sonnet (or any AI API, like ChatGPT), I needed to hit the API endpoints from the server, rather than the client. That’s for API key safety and for CORS reasons. If you’re getting Anthropic CORS error (like Access-Control-Allow-Origin being blocked by CORS policy), you’ll need to call the API from a server environment. Rather than setting up an entire backend and server, I decided to use Vercel functions, which was a clear choice given I was already hosting my frontend webapp using Vercel.

To get started with Vercel Functions, I created a new file in the folder /app/api/claude/route.ts . Vercel automatically deploys any files under the /api folder when you run vercel dev . Then, it was as simple as hitting the Anthropic API endpoint using their docs:

export const dynamic = 'force-dynamic'; // static by default, unless reading the request
import Anthropic from "@anthropic-ai/sdk";

export async function GET(request: Request) {
const anthropic = new Anthropic({
apiKey: process.env["REACT_APP_ANTHROPIC_API_KEY"]
});

const userInput = new URL(request.url).searchParams.get("userInput");
if (!userInput) {
return new Response("Missing userInput query parameter", { status: 400 });
}

const prompt = `Create a miniapp`

const msg = await anthropic.messages.create({
model: "claude-3-5-sonnet-20240620",
max_tokens: 2551,
temperature: 1,
system: prompt,
messages: [
{
"role": "user",
"content": [
{
"type": "text",
"text": "create a miniapp from this user input: " + userInput
}
]
}
]
});
return new Response(JSON.stringify(msg))
}

After that, there was some UI work to be done to create a textbox and accept user input. Then, calling the server function from the frontend looked like this:

const response = await fetch(`/api/claude?userInput=${encodeURIComponent(userInput)}`);
const data = await response.json();

Tada! At this point I was successfully calling Claude from any user input.

Prompt Engineering

Next, I needed to write the prompt to actually generate the code I was going to display to the user in the next step. This definitely took some trial and error, and things will likely change quickly after this article is written. Nevertheless, here are some tips for prompting Claude that worked well for me:

  • Make sure it only ever returns code — for this, I prompted “JUST WRITE CODE, NO YAPPING!”
  • Claude respects capitals more than lowercased text.
  • Guard against bad inputs and users attempting to take your model off the rails, like: “Return an error message for any prompts that are off-topic.”

Creating a Code Playground

So, I have a code file coming back from Claude, now the challenge is to interpret it and run it in my browser webapp. Ideally, the user would be able to make quick edits and have them live reload the preview. Rather than build the code interpreter myself, I used sandpack/react. Sandpack is the open-source underpinnings of Codesandbox, and includes all the stuff you’ll need to get a barebones code sandbox with a live reloading and preview window for React code. Get started with Sandpack here.

Here’s the template for a basic half code, half demo screen:

<SandpackProvider template="react" theme="auto">
<SandpackLayout>
<SandpackCodeEditor />
<SandpackPreview />
</SandpackLayout>
</SandpackProvider>

Once I had the basic version of Sandpack running, showing the default React template code editor and preview, I piped in the response from Claude as a new file to the Sandpack provider:

<SandpackProvider
template="react"
theme="auto"
files={{
"/NewApp.tsx": response!,
"/App.js": App,
}}
options={{
autoReload: true,
activeFile: "/NewApp.tsx",
externalResources: ["https://cdn.tailwindcss.com"]
}}
customSetup={{
dependencies: {
"p5": "latest"
}
}}
style={{ flex: 1 }}
>

I included the response from Claude as a new file, named NewApp.tsx, and then had another App.js that was a wrapper around the expected output, informed by my own business logic of conforming to the Seam miniapp protocol.

Important to note, I included Tailwind in Sandpack by using the externalResources. Given that Claude and GPT are really good at styling React components with Tailwind, it helped a lot with the visual style of the code that the model produced.

Read more about Sandpack:

Conclusion

All the code for this article is open source in the Seam Magic repo on github. If you’re curious about the next generation of coding on social platforms (and miss the days of coding on MySpace & Tumblr), check out Seam!

--

--

Nick Confrey
Nick Confrey

Written by Nick Confrey

Lessons learned from building iOS apps at scale. Twitter: @nickconfrey

No responses yet