This document provides a structured view of the technologies, tools, platforms, techniques, and frameworks that Bayat recommends for projects. The Tech Radar helps teams make technology choices by clearly indicating which technologies are preferred, which are being evaluated, and which should be avoided.
- Introduction
- How to Use the Tech Radar
- Radar Rings
- Quadrants
- Current Radar
- Technology Assessment Process
- Assessment Metrics
- Organizational Adoption Strategy
- Radar Visualization and Maintenance
- Adding New Technologies
- Technology Lifecycle Management
The Tech Radar is a tool to inspire and support teams in picking the best technologies for new projects. It provides a platform for sharing knowledge and experience with technologies, reflecting on technology decisions, and continuously evolving our technology landscape.
The Tech Radar should be consulted when:
- Starting a new project
- Evaluating technologies for an existing project
- Planning technology upgrades or migrations
- Making strategic technical decisions
The Tech Radar is not:
- A mandate or strict policy
- A comprehensive list of all technologies used at Bayat
- A replacement for detailed evaluation in your specific context
Technologies on the radar are positioned in one of four rings:
Adopt
Definition: Technologies we have high confidence in and recommend using.
Characteristics:
- Proven in production at Bayat
- Well-understood
- Mature ecosystem
- Strong community support
- Aligned with our strategic direction
Implications:
- Default choice for new projects
- Actively recommended
- Well-supported by internal resources and documentation
Trial
Definition: Technologies worth pursuing, but not yet fully proven within Bayat.
Characteristics:
- Shows clear benefits over existing solutions
- Successfully implemented in at least one project
- Some internal expertise exists
- Reasonable maturity and community support
Implications:
- Recommended for non-critical projects or components
- Requires monitoring and evaluation
- Knowledge sharing is expected from teams using these technologies
Assess
Definition: Technologies worth exploring with a low-risk approach.
Characteristics:
- Potentially valuable but unproven
- Limited internal experience
- May be emerging or niche
- Requires further evaluation
Implications:
- Suitable for proof-of-concepts or isolated components
- Requires explicit justification and risk assessment
- Requires clear learning and evaluation plan
Hold
Definition: Technologies that should be avoided for new projects.
Characteristics:
- Legacy technologies being phased out
- Technologies that didn't meet expectations in trials
- Technologies with significant known issues
- Declining industry support or community
- No longer aligned with our strategic direction
Implications:
- Not recommended for new projects
- Existing usage should have migration plans
- Requires exception approval for new usage
The radar is divided into four quadrants:
Languages & Frameworks
Programming languages, frameworks, and major libraries that form the foundation of our applications.
Examples: JavaScript, TypeScript, React, Angular, .NET Core, Django, Spring Boot, Unity, Unreal Engine
Tools
Development, testing, and operational tools that support the software development lifecycle.
Examples: Git, Docker, Kubernetes, Jenkins, GitHub Actions, Jira, Figma, VS Code, JetBrains IDEs
Platforms
Environments where we run our software, including cloud providers, databases, and middleware.
Examples: AWS, Azure, GCP, PostgreSQL, MongoDB, Redis, Kafka, RabbitMQ, Elasticsearch
Techniques
Methods, approaches, and practices that guide how we build software.
Examples: Microservices, Event-driven architecture, DevOps, TDD, BDD, Domain-driven design
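To make the ring and quadrant structure concrete, the following is a minimal sketch, in TypeScript, of how a single radar entry could be modeled. All type and field names here are illustrative assumptions, not a prescribed schema.

```typescript
// Minimal sketch of a radar entry model; names and fields are
// illustrative, not a prescribed schema.
type Ring = "adopt" | "trial" | "assess" | "hold";
type Quadrant =
  | "languages-and-frameworks"
  | "tools"
  | "platforms"
  | "techniques";

interface RadarEntry {
  name: string;
  ring: Ring;
  quadrant: Quadrant;
  description: string;   // value proposition and usage context
  lastReviewed: string;  // ISO date of the last quarterly review
  movedFrom?: Ring;      // previous ring, if the entry has moved
}

// Example entry (date and wording are placeholders):
const example: RadarEntry = {
  name: "TypeScript",
  ring: "adopt",
  quadrant: "languages-and-frameworks",
  description: "Typed superset of JavaScript; a default for new web projects.",
  lastReviewed: "YYYY-MM-DD",
};
```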
Note: This section should be updated quarterly. Last updated: [DATE].
Languages & Frameworks:
- TypeScript
- React
- .NET Core
- Python
- Swift
- Kotlin
- Unity
- Rust
- Flutter
- SwiftUI
- Vue.js
- Go
- WebAssembly
- Svelte
- Kotlin Multiplatform
- Jetpack Compose
- Deno
- AngularJS (v1)
- jQuery
- PHP (older versions)
- Java (versions < 11)
- .NET Framework (non-Core)
Tools:
- GitHub/GitLab
- VS Code
- Docker
- Kubernetes
- Terraform
- GitHub Actions/Azure DevOps
- Jest
- Cypress
- Figma
- ArgoCD
- Playwright
- Pulumi
- Grafana
- Prometheus
- GitHub Copilot
- Nx
- Backstage
- OpenTelemetry
- k6
- Jenkins (pipeline-as-code is acceptable)
- Selenium
- Chef/Puppet
- Travis CI
Platforms:
- AWS
- Azure
- PostgreSQL
- Redis
- MongoDB
- Elasticsearch
- Kafka
- GCP
- Snowflake
- DataDog
- Vercel
- Cloudflare Workers
- AWS Amplify
- Supabase
- PlanetScale
- Edge computing platforms
- AWS AppSync
- Self-hosted infrastructure (except special cases)
- Oracle DB
- MSSQL (for new projects)
- Heroku
Techniques:
- DevOps
- Infrastructure as Code
- Microservices (when appropriate)
- API-first design
- Feature flags
- TDD/BDD
- Observability
- Continuous Deployment
- Event-driven architecture
- GraphQL
- Service meshes
- GitOps
- Design systems
- Micro-frontends
- WASM for edge computing
- FinOps
- Platform engineering
- AI/ML-driven development
- Monolithic architectures (for large projects)
- Waterfall development
- Manual deployment processes
- Shared databases between services
When assessing new technologies, consider:
Strategic Alignment:
- How well does it align with our technical strategy?
- Does it support our business objectives?
- How does it enhance our competitive advantage?
- Does it enable future business capabilities?
Technical Capability:
- Does it solve the problem effectively?
- How does it compare to alternatives?
- What are its performance characteristics?
- How well does it handle scale and complexity?
- What technical limitations might affect us?
Operational Impact:
- How mature and stable is it?
- What are the security implications?
- How well does it integrate with our existing stack?
- What is the operational overhead?
- How does it affect our observability practices?
- What's the disaster recovery approach?
Community & Support:
- How active is the community?
- Is there commercial support if needed?
- How well is it documented?
- What is the release cadence and roadmap?
- Are there known vulnerabilities or pending security issues?
- How responsive is the project to bugs and issues?
Team Capability:
- Do we have the skills to implement and maintain it?
- What is the learning curve?
- Will it be appealing for recruitment?
- How does it affect our hiring strategy?
- What training is required?
The assessment process moves through five stages:
- Initial Research: Gather information about the technology
- Proof of Concept: Test in a controlled environment
- Pilot Implementation: Use in a non-critical project
- Evaluation: Review the results against criteria
- Decision: Place on the radar in the appropriate ring
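As an illustration, the five stages above can be treated as a strict progression. The sketch below (hypothetical names, not tooling we ship) encodes the order so a technology advances one stage at a time rather than skipping ahead.

```typescript
// Illustrative encoding of the five assessment stages as an ordered
// progression; a technology advances one stage at a time.
const STAGES = [
  "initial-research",
  "proof-of-concept",
  "pilot-implementation",
  "evaluation",
  "decision",
] as const;

type Stage = (typeof STAGES)[number];

// Returns the next stage, or null once a decision has been reached.
function nextStage(current: Stage): Stage | null {
  const i = STAGES.indexOf(current);
  return i < STAGES.length - 1 ? STAGES[i + 1] : null;
}

console.log(nextStage("proof-of-concept")); // "pilot-implementation"
```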
To ensure consistent technology evaluation, we use both quantitative and qualitative metrics:
Development Velocity:
- Time to implement standard features
- Build/compile time
- Local development setup time
- Code volume for common patterns
Learning Curve:
- Time to onboard new developers
- Quality of documentation
- Availability of training resources
- Complexity of concepts
Code Quality:
- Static analysis support
- Testing framework maturity
- Type safety (when applicable)
- Maintainability metrics
Performance:
- Response time under various loads
- Resource utilization (CPU, memory, storage)
- Scalability characteristics
- Startup time
Reliability:
- Failure rates
- Recovery mechanisms
- Resilience patterns support
- Stability under stress
Security:
- Known vulnerability count
- Security feature set
- Update frequency for security issues
- Authentication/authorization capabilities
- Compliance with security standards
Cost Factors:
- Licensing costs
- Infrastructure requirements
- Development time
- Operational overhead
- Support costs
Time to Market:
- Development time reduction
- Deployment efficiency
- Integration capabilities
- Reusability of components
Business Capabilities:
- Features enabled
- Competitive advantages created
- User experience improvements
- Business process optimizations
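One way to make these metrics comparable across candidate technologies is a weighted scorecard. The sketch below is a hypothetical rubric: the criterion names mirror this section, but the 1-5 scale and the weights are assumptions, not an official standard.

```typescript
// Hypothetical weighted scorecard: each criterion is scored 1-5,
// weighted, and the result is normalized into [0, 1].
interface Criterion {
  name: string;
  weight: number; // relative importance
  score: number;  // 1 (poor) to 5 (excellent)
}

function weightedScore(criteria: Criterion[]): number {
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  const total = criteria.reduce((sum, c) => sum + c.weight * (c.score / 5), 0);
  return total / totalWeight;
}

// Example evaluation with assumed weights and scores:
const candidate: Criterion[] = [
  { name: "Development Velocity", weight: 3, score: 4 },
  { name: "Performance", weight: 2, score: 3 },
  { name: "Security", weight: 3, score: 5 },
  { name: "Cost Factors", weight: 2, score: 3 },
];

console.log(weightedScore(candidate).toFixed(2)); // "0.78"
```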
Successfully integrating new technologies requires a structured approach:
For each technology in the Adopt and Trial rings:
Identify Champions:
- Designate 2-3 experts per technology
- Ensure cross-team representation
- Allocate dedicated learning time
Champion Responsibilities:
- Maintain internal documentation
- Review implementation approaches
- Provide consultation to teams
- Track technology updates
- Lead training sessions
Documentation:
- Internal wikis for each Adopt technology
- Best practices guides
- Known pitfalls and workarounds
- Architecture patterns
- Integration examples
Community Building:
- Regular tech talks (monthly)
- Community of practice meetings
- Discussion channels
- Code review expertise
- Internal newsletters
Training Strategy:
- Onboarding pathways for Adopt technologies
- Lunch and learn sessions for Trial technologies
- Hackathon events for Assess technologies
- External training resources curation
Starter Kits:
- Project templates
- Boilerplate code
- Configuration examples
- CI/CD templates
Migration Patterns:
- Incremental migration guides
- Interoperability examples
- Legacy system integration patterns
- Migration testing approaches
Technical Support:
- Dedicated support channels
- Troubleshooting guides
- Performance optimization tips
- Integration support
The radar itself is visualized and maintained through the following channels:
Interactive Web Radar:
- Clickable, dynamic visualization
- Technology details on demand
- Filtering by quadrant or ring (see the sketch below)
- Historical view of movement
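A sketch of the filtering behavior, reusing the hypothetical RadarEntry, Quadrant, and Ring types from the Quadrants section:

```typescript
// Filter radar entries by quadrant and/or ring; both filters are optional.
// Uses the hypothetical RadarEntry model sketched earlier.
function filterRadar(
  entries: RadarEntry[],
  filter: { quadrant?: Quadrant; ring?: Ring }
): RadarEntry[] {
  return entries.filter(
    (e) =>
      (filter.quadrant === undefined || e.quadrant === filter.quadrant) &&
      (filter.ring === undefined || e.ring === filter.ring)
  );
}

// e.g. all platform technologies currently in Trial:
// filterRadar(allEntries, { quadrant: "platforms", ring: "trial" });
```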
Regular Reports:
- Quarterly PDF snapshots
- Technology movement highlights
- New additions and rationale
- Upcoming evaluations
Integration with Internal Tools:
- Project inception checklists
- Architecture review tools
- Developer portals
- Knowledge base systems
Regular Reviews:
- Quarterly radar updates
- Annual comprehensive review
- On-demand updates for strategic technologies
Governance:
- Technology Review Board oversight
- Approval workflow for ring changes
- Exception management process
- Strategic alignment verification
Feedback Mechanisms:
- User surveys on technology experiences
- Implementation retrospectives
- Operational metrics tracking
- Community input channels
To propose adding a new technology to the radar:
Create a Proposal that includes:
- Technology name and description
- Value proposition
- Use cases at Bayat
- Comparison with existing technologies
- Risk assessment
- Learning resources
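For teams that want to capture proposals as structured data, here is a minimal sketch of a proposal record, with hypothetical field names mirroring the list above:

```typescript
// Hypothetical shape of a technology proposal record.
interface TechnologyProposal {
  name: string;
  description: string;
  valueProposition: string;
  useCases: string[];          // concrete use cases at Bayat
  comparison: string;          // comparison with existing technologies
  riskAssessment: string;
  learningResources: string[]; // links to docs, courses, talks
}
```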
Submit the proposal to the Architecture Review Board for review.
Review Process:
- Technical evaluation
- Security review
- Strategic alignment check
- Final decision on radar placement
Communication:
- Update the Tech Radar
- Announce the change
- Provide implementation guidance
When technologies move to the Hold ring, we implement a structured lifecycle management approach:
Deprecation Announcement:
- Clear communication to all teams
- Rationale for the decision
- Timeline for support reduction
- Migration recommendations
Support Reduction Timeline:
- Phase 1: New projects discouraged (months 0-6)
- Phase 2: New projects prohibited (months 6-12)
- Phase 3: Support limited to critical fixes (months 12-24)
- Phase 4: No support (after month 24)
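As a worked illustration of this timeline, months elapsed since the deprecation announcement map to a support phase. The function and phase names below are hypothetical; the boundaries follow the list above, with each boundary month rolling into the later phase.

```typescript
// Map months since the deprecation announcement to a support phase.
type SupportPhase =
  | "new-projects-discouraged" // months 0-6
  | "new-projects-prohibited"  // months 6-12
  | "critical-fixes-only"      // months 12-24
  | "unsupported";             // after month 24

function supportPhase(monthsSinceDeprecation: number): SupportPhase {
  if (monthsSinceDeprecation < 6) return "new-projects-discouraged";
  if (monthsSinceDeprecation < 12) return "new-projects-prohibited";
  if (monthsSinceDeprecation < 24) return "critical-fixes-only";
  return "unsupported";
}

console.log(supportPhase(8)); // "new-projects-prohibited"
```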
Migration Support:
- Migration guides for recommended alternatives
- Transition patterns and case studies
- Code migration tools when applicable
- Technical support for migration challenges
In some cases, continued use of Hold technologies may be necessary:
Exception Request Requirements:
- Business justification
- Risk assessment
- Containment strategy
- Long-term plan for replacement
- Executive sponsor
Approval Process:
- Architecture review
- Security assessment
- Operational risk evaluation
- Time-bound approval with review dates
Management of Exceptions:
- Quarterly review of all exceptions
- Renewal requirements
- Tracking of exception inventory
- Mitigation plan updates
For strategic modernization of systems using Hold technologies:
Assessment Approach:
- Technical debt quantification
- Business impact analysis
- Modernization options evaluation
- Cost-benefit analysis
Modernization Patterns:
- Strangler fig pattern
- Parallel implementation
- Incremental replacement
- Rebuild vs. refactor decision framework
Modernization Governance:
- Progress tracking
- Risk management
- Knowledge transfer
- Operational handover