This page answers typical questions about the idea, benefits, architecture, operation and target audience of the Agentic Software Factory.
Yes. A dedicated whitepaper serves as an architecture deep dive, describing the architectural foundations of agentic software development: reference architecture, agent orchestration, AI guardrails, shared knowledge stores and integration with the software development lifecycle. It complements this product website with the conceptual depth the Software Factory is built on. An overview is on the Whitepaper page, where the 76-page PDF is available for free and without registration.
That depends on the tier:
Enterprise Air-Gap: no data transfer at all, since no network contact takes place. Details on the transparency page.
Yes, with an Enterprise Air-Gap license.
The LLM choice is a customer decision and is configured in the client. In air-gapped operation, only on-premises LLMs can be used.
The Agentic Software Factory is a local control plane for AI-assisted software development. It combines structured project intake, automatically generated markdown artifacts, run orchestration, Git and build transparency as well as traceable approvals in a single web application.
When working directly in the shell, project definition, guardrails, approvals, Git discipline and status monitoring largely have to be organised manually. The Software Factory makes exactly these aspects visible and reusable.
Primarily for software architects, lead developers, technical project leads and teams who want to run AI-assisted development in a more controlled and reproducible way. It is not intended as a mass consumer product but as a productive working environment for technical stakeholders.
Files such as PROJECT.md or INSTRUCTIONS.md are easy to version, human-readable and well suited to AI-assisted tools. They form the working contract between user, platform and agent.
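As a minimal sketch, such an artifact might look as follows; the section names and field values are illustrative assumptions, not the platform's actual schema:

```
# PROJECT.md (illustrative sketch)

## Goal
Provide a REST endpoint for order status lookups.

## Constraints
- No secrets in code
- Every merge requires an explicit approval

## Status
Phase: intake
```

Because the artifact is plain markdown, it diffs cleanly in Git and can be read by humans and agents alike.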
Version 1 focuses on orchestration, the run model, Git/build integration and traceability. A server-side UI with Spring Boot and Thymeleaf reduces complexity and lets the product core mature faster.
Not yet fully. Version 1 prepares roles, teams and workflow structures for it but starts with a single executing adapter. The goal is a clean growth path, not maximum complexity on day one.
Typical roles are Architect, Developer, Reviewer, QA, Security Reviewer, Documentation and Merge/Release. These roles can be stored as a domain model and later assembled into teams.
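A hedged sketch of how such a role domain model could look in Java; the type names (AgentRole, Team) are assumptions for illustration, not the product's actual API:

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch: agent roles as a domain enum, later assembled into teams.
// The enum values mirror the roles listed above; Team is an assumed type.
enum AgentRole {
    ARCHITECT, DEVELOPER, REVIEWER, QA,
    SECURITY_REVIEWER, DOCUMENTATION, MERGE_RELEASE
}

// A team is simply a named set of roles.
record Team(String name, Set<AgentRole> roles) {}

public class TeamDemo {
    public static void main(String[] args) {
        // Assemble a review-focused team from stored roles.
        Team reviewTeam = new Team("review",
                EnumSet.of(AgentRole.REVIEWER, AgentRole.SECURITY_REVIEWER, AgentRole.QA));
        System.out.println(reviewTeam.roles().size()); // prints 3
    }
}
```

Storing roles as an enum keeps the set closed and versionable, while teams remain free compositions over it.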
Typically status, current phase, logs, commit history, working tree state, build results, approval decisions and the related artifacts. The goal is that the user does not experience the run as a black box.
Yes, at least conceptually. The architecture cleanly encapsulates adapters. Early phases can therefore be run with a mock adapter or with an execution adapter that is swapped out later.
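The adapter encapsulation described above can be sketched as a small interface with a mock implementation; the names (ExecutionAdapter, MockAdapter) are assumptions, not the product's actual API:

```java
// Illustrative sketch of adapter encapsulation. Orchestration code depends
// only on the interface, so the executing backend can be swapped later.
interface ExecutionAdapter {
    String execute(String task);
}

// Mock adapter for early phases: returns canned output instead of
// invoking a real agent, so runs can be exercised end to end.
class MockAdapter implements ExecutionAdapter {
    @Override
    public String execute(String task) {
        return "MOCK: " + task;
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        ExecutionAdapter adapter = new MockAdapter(); // swap for a real adapter later
        System.out.println(adapter.execute("generate PROJECT.md")); // prints MOCK: generate PROJECT.md
    }
}
```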
Yes, it is generally a good fit, since it emphasises traceability, structure, documentation, Git discipline and explicit approvals. The specific hardening and organisational embedding depend on the later deployment context.
The platform is designed for data-minimal and traceable use. Key principles are: no secrets in code, no tokens or session IDs in logs, traceable approvals, controlled adapters and clear technical gates.
Yes. That is exactly what project definitions, status models, artifact versions and run lists are for. Projects and runs should remain traceable independently of each other.
Through standardised project fields, consistent markdown artifacts, stored agent roles, team templates and documented approval and build rules. Individual experience turns into a repeatable process.
Best to start with a small internal example project, a clear target picture and the mock adapter. Only once the wizard, artifact generation, run model and logs work cleanly should the real Claude Code adapter be used in production.