Two BUET graduates who built FabricxAI's 22-agent system, then turned the same architecture on SocioFi itself.
CEO & CO-FOUNDER · SINCE AUGUST 1, 2024
BUET graduate. Product strategist. The person who lived the problem SocioFi was built to solve — tried to deploy an AI-built prototype, spent three weeks hitting walls, talked to dozens of founders with identical stories.
Owns
Client relationships, business development, Studio pipeline quality, final approval on all external communications.
Reviews daily
Every HERALD-drafted email. Every SCOUT client spec. All frontend outputs from MIRROR. Client handoff docs from BEACON.
Typical day
Project briefings at 9am. Client calls 10am–12pm. Agent output review 2–4pm. Business development evenings.
Background
BUET CS. Worked across product and operations before SocioFi. Deep experience with the non-technical founder perspective.
CTO & CO-FOUNDER · SINCE AUGUST 1, 2024
BUET graduate. Production systems engineer. Designed the 22-agent FabricxAI system from scratch, built DevBridge OS, and architected the NEXUS admin system that now runs SocioFi's own operations.
Owns
All technical architecture, agent system design, code review on every project, DevBridge pipeline, infrastructure decisions.
Reviews daily
Every ATLAS architecture. All FORGE backend code. Every SENTINEL security finding. NEXUS pipeline logs for active projects.
Typical day
Architecture reviews 8–10am. Code and agent output review 10am–1pm. Pipeline design and R&D afternoons. Labs writing evenings.
Background
BUET CS. Production systems experience across manufacturing, logistics, and B2B software. Deep expertise in multi-agent AI architectures.
10 specialized agents built and maintained by Kamrul. Every Studio project runs through this pipeline.
SCOUT · Requirements Analyst
Turns vague briefs into airtight specifications.
How it works
Reads the client's project brief, interview notes, or initial description. Extracts functional requirements, technical constraints, edge cases, and success criteria. Outputs a structured specification document that every downstream agent relies on.
What it can't do
Clarify genuinely ambiguous requirements without human input. If a brief is vague on a critical point, SCOUT flags it for founder review rather than guessing — a wrong assumption at step 1 costs the entire pipeline.
Who reviews the output
Kamrul Hasan (CTO) reviews all SCOUT specifications before the pipeline proceeds. Arifur reviews anything touching client-facing features.
Sample output
Structured spec with 24 requirements across 6 feature categories. 3 flagged ambiguities sent for human clarification. 2 out-of-scope items identified and noted separately.
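As an illustration only, the kind of structured output described above can be sketched as a small data model. The field names here are hypothetical, not SCOUT's actual schema; the point is that ambiguities travel as explicit flags rather than silent guesses:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One functional requirement extracted from the brief."""
    id: str
    category: str
    description: str
    ambiguous: bool = False  # flagged for founder review, never guessed at

@dataclass
class Spec:
    """Structured specification consumed by every downstream agent."""
    requirements: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)

    def flagged(self):
        # Surface every requirement that needs human clarification.
        return [r for r in self.requirements if r.ambiguous]

spec = Spec(requirements=[
    Requirement("R-01", "auth", "Users sign in with email + password"),
    Requirement("R-02", "billing", "'Premium tier' pricing undefined", ambiguous=True),
])
print(len(spec.flagged()))  # → 1 requirement held for human review
```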
HUNTER · Pattern Researcher
Finds what already works so we don't rebuild from scratch.
How it works
Takes SCOUT's structured spec and researches proven technical approaches. Identifies relevant libraries, existing patterns, reference implementations, known failure modes in similar projects, and compatibility constraints. Outputs a research brief with ranked stack recommendations.
What it can't do
Evaluate whether an approach is right for a specific client's business context. HUNTER knows patterns — Kamrul knows which patterns match which situations.
Who reviews the output
Kamrul Hasan reviews all stack and library recommendations before they're committed to the project.
Sample output
Stack recommendation: Next.js + Supabase + Stripe. 4 alternative approaches evaluated with trade-offs. 2 known pitfalls documented with specific mitigations. 1 library conflict flagged.
ATLAS · System Architect
Blueprints the entire system before a line of code is written.
How it works
Takes SCOUT's requirements and HUNTER's research to design the full technical architecture. Creates component hierarchy, database schema, API contracts, data flow diagrams, and integration maps. Everything that follows is built from ATLAS's blueprint.
What it can't do
Make architecture trade-off decisions that depend on business priorities — such as choosing between a more scalable but complex architecture vs. a simpler one that ships faster. These calls go to Kamrul.
Who reviews the output
Kamrul Hasan reviews every architecture decision. No code is generated until the architecture is approved.
Sample output
18-table database schema. 34 API endpoints documented with request/response contracts. 3 architectural alternatives compared for the real-time sync component. Data flow diagrams for 6 key user journeys.
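As a sketch of what "API endpoints documented with request/response contracts" can mean in practice (the endpoint, field names, and `conforms` helper below are invented for illustration, not ATLAS's actual output format):

```python
from dataclasses import dataclass

# Hypothetical contract of the kind ATLAS fixes up front: both MIRROR and
# FORGE build against the same declared shapes before any code exists.

@dataclass
class CreateOrderRequest:
    customer_id: str
    items: list          # e.g. list of (sku, quantity) pairs

@dataclass
class CreateOrderResponse:
    order_id: str
    status: str          # "pending" | "confirmed"
    total_cents: int

CONTRACT = {
    "POST /orders": (CreateOrderRequest, CreateOrderResponse),
}

def conforms(payload, schema):
    """True when a payload carries exactly the fields the contract names."""
    return set(payload) == set(schema.__dataclass_fields__)

print(conforms({"customer_id": "c1", "items": []}, CreateOrderRequest))  # → True
```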
MIRROR · UI Developer
Builds the entire frontend from ATLAS's blueprint.
How it works
Runs in parallel with FORGE. Generates complete frontend from ATLAS's architecture and HUNTER's stack: pages, components, forms, dashboards, responsive layouts, and design system implementation. Outputs production-grade frontend code.
What it can't do
Make aesthetic judgment calls beyond the technical brief. MIRROR implements what's specified — ambiguous design decisions that require client taste or business judgment are flagged for Arifur to resolve.
Who reviews the output
Arifur Rahman reviews all frontend outputs for quality and alignment with client expectations before HAMMER integrates them.
Sample output
14 pages generated. 48 components. Full responsive implementation for mobile, tablet, and desktop. 2 design ambiguities flagged for client review. Design system tokens applied consistently.
FORGE · Backend Developer
Builds the entire backend in parallel with MIRROR.
How it works
Runs in parallel with MIRROR. Generates all server-side code from ATLAS's API contracts: REST and GraphQL endpoints, business logic layers, database operations, authentication systems, background jobs, webhooks, and server-side validation. Outputs production-grade backend code.
What it can't do
Handle novel security edge cases not covered by established patterns. This is intentional — SENTINEL exists specifically to catch the security problems FORGE might not anticipate.
Who reviews the output
Kamrul Hasan reviews all backend outputs before HAMMER begins integration.
Sample output
34 API routes. Authentication system with session management. 6 background jobs. Database migrations for all 18 tables. 3 webhook handlers. Input validation on all endpoints.
HAMMER · Integration Engineer
Assembles MIRROR and FORGE into one working application.
How it works
Takes the frontend from MIRROR and backend from FORGE and assembles the complete, integrated application. Wires all API connections, resolves interface conflicts, handles environment configuration, establishes end-to-end data flows, and runs integration smoke tests.
What it can't do
Debug complex race conditions, performance bottlenecks under load, or environment-specific issues that require deep systems knowledge. HAMMER flags these for Kamrul with full context.
Who reviews the output
Kamrul Hasan reviews integration outputs and the integration smoke test results before SENTINEL begins review.
Sample output
Full-stack integration complete. 3 frontend/backend interface conflicts identified and resolved. 2 performance issues flagged with reproduction steps. End-to-end smoke tests passing for all 14 core user flows.
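The interface-conflict check described above can be sketched minimally. The routes and field sets here are hypothetical, and real integration tooling does far more; this only shows the shape of the frontend/backend reconciliation step:

```python
# What the frontend expects to call, and what the backend actually serves.
# Both maps are invented for illustration.
FRONTEND_CALLS = {"/api/orders": {"id", "status", "total"}}
BACKEND_ROUTES = {"/api/orders": {"id", "status", "total", "created_at"}}

def check_contracts(frontend, backend):
    """Flag routes the backend is missing, or fields it doesn't return."""
    conflicts = []
    for route, fields in frontend.items():
        if route not in backend:
            conflicts.append((route, "missing route"))
        elif not fields <= backend[route]:
            conflicts.append((route, "missing fields"))
    return conflicts

print(check_contracts(FRONTEND_CALLS, BACKEND_ROUTES))  # → [] — no conflicts
```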
SENTINEL · Security & Quality Reviewer
The agent that reviews everything everyone else built.
How it works
Performs a comprehensive review of the complete integrated codebase. Checks for security vulnerabilities (injection, auth flaws, data exposure, rate limiting), architectural problems, logic errors, code quality issues, and deviations from ATLAS's architecture. SENTINEL does not write code — it reviews it.
What it can't do
Verify that business logic is correct without understanding the client's domain. Domain-specific logic review requires human expertise. SENTINEL catches technical problems; Kamrul reviews domain accuracy.
Who reviews the output
Kamrul Hasan reviews every SENTINEL finding report. High-severity issues must be resolved before SHIELD proceeds. No deployment exceptions.
Sample output
2 high-severity issues: SQL injection risk in search query construction, missing rate limiting on authentication endpoint. 6 medium-severity issues. 4 low-severity style inconsistencies. All high-severity issues resolved before handoff to SHIELD.
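For illustration only (this is not code from an actual project), the SQL-injection pattern named in the sample finding comes down to the difference between interpolating user input into a query string and binding it as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.execute("INSERT INTO products VALUES ('widget')")

def search_unsafe(term):
    # The pattern SENTINEL flags: user input concatenated into SQL.
    return conn.execute(
        f"SELECT name FROM products WHERE name LIKE '%{term}%'"
    ).fetchall()

def search_safe(term):
    # The fix: input bound as a parameter, never interpolated.
    return conn.execute(
        "SELECT name FROM products WHERE name LIKE ?", (f"%{term}%",)
    ).fetchall()

payload = "%' OR 1=1 --"
print(search_unsafe(payload))  # returns every row: the OR clause fires
print(search_safe(payload))    # → [] — the payload is treated as a literal
```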
SHIELD · Testing & Deployment Engineer
Tests thoroughly, deploys carefully, verifies completely.
How it works
Writes comprehensive test suites: unit tests, integration tests, end-to-end tests, and edge case scenarios. Runs the full suite and reports results. Then manages staging deployment, runs post-deployment health checks, and verifies production readiness before the final go-live.
What it can't do
Test user experience quality, subjective usability, or whether the product serves the client's business objectives. SHIELD validates that the code does what it's supposed to do — it can't validate that it does what the client actually needs.
Who reviews the output
Both Kamrul and Arifur review the pre-deployment report. Production go-live requires explicit founder approval.
Sample output
247 automated tests written and passing. 94% code coverage. Staging deployment verified with 12 post-deployment checks. Zero critical health check failures. Production deployment ready for approval.
BEACON · Documentation Writer
Makes sure the work outlasts the project.
How it works
Generates comprehensive documentation across the entire codebase: README, API endpoint documentation with examples, architecture decision records, database schema documentation, deployment runbooks, environment setup guides, and client-facing handoff documents in plain English.
What it can't do
Write contextual documentation explaining why business decisions were made, or provide strategic guidance on how to grow the product. Those require human judgment and are authored by the founders.
Who reviews the output
Kamrul reviews technical docs for accuracy; Arifur reviews client-facing handoff documentation for clarity and completeness.
Sample output
README. 34 API endpoint docs with request/response examples. Architecture decision records for 6 key technical choices. Database schema reference with field descriptions. 12-page client handoff guide written for a non-technical founder.
NEXUS · Build Orchestrator
Coordinates everything. Builds nothing.
How it works
Manages the entire DevBridge pipeline. Routes work between agents in the correct order, enforces pipeline dependencies, monitors agent progress and timeouts, handles retries on transient failures, and ensures the right outputs flow into the right inputs at each step. NEXUS orchestrates — it doesn't build.
What it can't do
Make judgment calls when agents encounter genuinely novel problems or business-critical ambiguities. NEXUS recognizes these situations and escalates to Kamrul with full context rather than guessing forward.
Who reviews the output
Kamrul Hasan monitors NEXUS orchestration logs for all active projects and reviews every escalation.
Sample output
Pipeline run completed in 4h 23m. 3 inter-agent handoffs. 1 HAMMER timeout — retried successfully after 12-minute delay. SENTINEL flagged 2 high-severity issues — pipeline paused for Kamrul review. All stages resolved to green.
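A minimal sketch of dependency-ordered routing, the core of what an orchestrator like NEXUS enforces. The dependency map below is inferred from the agent descriptions above; the scheduler itself is illustrative, not NEXUS's actual code:

```python
# Pipeline dependencies, read off the agent descriptions:
# SCOUT -> HUNTER -> ATLAS -> (MIRROR ∥ FORGE) -> HAMMER
# -> SENTINEL -> SHIELD -> BEACON.
DEPS = {
    "SCOUT": [], "HUNTER": ["SCOUT"], "ATLAS": ["HUNTER"],
    "MIRROR": ["ATLAS"], "FORGE": ["ATLAS"],
    "HAMMER": ["MIRROR", "FORGE"], "SENTINEL": ["HAMMER"],
    "SHIELD": ["SENTINEL"], "BEACON": ["SHIELD"],
}

def run_order(deps):
    """Batch agents so every input exists before the agent that needs it."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = sorted(a for a in deps
                       if a not in done and all(d in done for d in deps[a]))
        if not ready:
            raise ValueError("cycle in pipeline dependencies")
        order.append(ready)   # agents in one batch can run in parallel
        done.update(ready)
    return order

for batch in run_order(DEPS):
    # MIRROR and FORGE land in the same batch: they run in parallel.
    print(batch)
```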
Hired from SocioFi Guild. Their job is to review agent output — not to do the work themselves.
Coming — Studio
Lead Software Architect
Coming — Services
Site Reliability Engineer
Coming — Labs
Research Engineer
Coming — Products
Product Engineer
Coming — Academy
Curriculum Architect
Coming — Ventures
Technical Due Diligence Lead
Coming — Cloud
Infrastructure Engineer
Coming — Technology
Chief of Staff (AI Operations)
Supervisors will be hired from SocioFi Guild — a curated network of specialist engineers with demonstrated technical competence and experience working alongside AI systems. Apply →
| Division | SCOUT | HUNTER | ATLAS | MIRROR | FORGE | HAMMER | SENTINEL | SHIELD | BEACON | NEXUS |
|---|---|---|---|---|---|---|---|---|---|---|
| Studio | | | | | | | | | | |
| Services | | | | | | | | | | |
| Labs | | | | | | | | | | |
| Products | | | | | | | | | | |
| Cloud | | | | | | | | | | |