New Side Project: Monitower

March 15, 2026 · 3 min read

SWE · Project · Claude · Agents

Monitower: Monitoring Solution

https://github.com/ParkerGoby/Monitower

I've been thinking about building an error monitoring tool for a while. My interest comes from a few places: stakeholders love visualizations, I want to play around with Go in a microservices setting, and I've been itching to build some sort of side project. I already knew of Sentry, but it's far too heavy a piece of software to run locally for something small. I then discovered GlitchTip, an open source alternative with a free self-hosted version. I decided to build my own anyway because:

  1. I want to build something from scratch
  2. This is a learning experience, and I don't intend for this to be a tool people actually adopt.
  3. If something novel comes from this idea, then Monitower is the prototype.

There is a side conversation I'd be interested in writing about: how AI makes prototyping extremely efficient. In my opinion, who cares about AI slop if it's just for a demo? If the demo is strong, then have people build the real tool using best practices, careful reviews, and a more traditional engineering process. A perfect use case, in my mind.

But back to Monitower. I decided I wanted an additional learning experience from this endeavor: using agents to build the entire app. I want to learn what agents are capable of, where they are weak, and where they are strong. I want to know which guardrails matter, and I want to learn a lot about this kind of system at a high level. So this tool will be written entirely by prompting an agent, and so far the implementation strategy has been working extremely well.

Agent Architecture and Implementation

I asked my lovely agent to talk me through the project and discuss several details about it. We built a high-level architecture overview and several markdown files describing each piece of the stack. That way, future agents just need to look at what is implemented, what is left, and what would be an appropriately sized piece of work to tackle. Once those markdown files were written, I told the agent to make the system testable. I had it scaffold tests, linting, formatting, and so on, so all I had to do with the next agent was say "follow your CLAUDE.md rules, pick some work, and go." So far, it's gone smoothly. I also told the agents to follow Test Driven Development (TDD): always write the tests before the code.
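To make that TDD loop concrete, here's a minimal sketch of what the first slice looks like. The names are illustrative stand-ins, not Monitower's actual API. The test comes first, and it fails because nothing exists yet:

```go
// queue_test.go — written first, per the TDD rule; the compile
// failure is the first "red".
package queue

import "testing"

func TestQueuePreservesFIFOOrder(t *testing.T) {
	q := New()
	q.Push("a")
	q.Push("b")

	if got := q.Pop(); got != "a" {
		t.Fatalf("first Pop() = %q, want %q", got, "a")
	}
	if got := q.Pop(); got != "b" {
		t.Fatalf("second Pop() = %q, want %q", got, "b")
	}
}
```

Only after that test exists does the agent write the implementation that turns it green:

```go
// queue.go — the minimal implementation that makes the test pass.
package queue

type Queue struct{ items []string }

func New() *Queue { return &Queue{} }

func (q *Queue) Push(s string) { q.items = append(q.items, s) }

// Pop returns the oldest item, or "" if the queue is empty.
func (q *Queue) Pop() string {
	if len(q.items) == 0 {
		return ""
	}
	s := q.items[0]
	q.items = q.items[1:]
	return s
}
```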

So far, it's been really solid. The first agent went through and built a queue and a fault generator. It wrote the tests, it wrote the functionality, and it was quick. The next few agents each took a slice and made their changes. Then it started to get hung up on tests. The problem was that it had sleeps in several places, and it was testing an engine that only has a *chance* to send errors. This made tests take FOREVER, and they often failed, because when there is only a chance of an error, there is a chance the edge case never fires. So future me now knows to be more specific when explaining the testing structure: when testing integration functionality, force the input you need to make the test happen; when testing units, mock things (if possible) and don't rely on code you aren't testing. Overall, a small bump in the road. My goal now is to finish up the tool and see if it performs well.
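Here's a sketch of the fix I'd prompt for next time (again with illustrative names, not Monitower's real code): inject the source of randomness so a unit test can force both branches, instead of sleeping and hoping the random path shows up.

```go
// engine.go — the fault engine takes its randomness as a dependency.
package engine

import "math/rand"

// Engine fires a fault with probability p per tick. rnd is injected so
// tests can pin its output; production code passes nil for the default.
type Engine struct {
	p   float64
	rnd func() float64 // returns a value in [0.0, 1.0)
}

func New(p float64, rnd func() float64) *Engine {
	if rnd == nil {
		rnd = rand.Float64
	}
	return &Engine{p: p, rnd: rnd}
}

// Tick reports whether a fault fired on this tick.
func (e *Engine) Tick() bool { return e.rnd() < e.p }
```

```go
// engine_test.go — no sleeps, no dice rolls: both branches are forced.
package engine

import "testing"

func TestTickForcedBranches(t *testing.T) {
	fires := New(0.5, func() float64 { return 0.0 }) // always below p
	if !fires.Tick() {
		t.Fatal("expected a fault when rnd() < p")
	}

	quiet := New(0.5, func() float64 { return 0.9 }) // always at/above p
	if quiet.Tick() {
		t.Fatal("expected no fault when rnd() >= p")
	}
}
```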
