Welcome to the 2026 Rebuilt Example Code. The goal of this repository is to explore how a minimally adapted kitbot could be programmed using the Phoenix software suite and command-based best practices. This robot builds on top of the CTR Electronics SwerveWithPathPlanner example project and implements the following features:
- Generated swerve code with Phoenix Tuner X.
- Generated intake and flywheel code with Phoenix Tuner X (coming soon).
- Robot localization using AprilTags (note that the camera transforms should be adjusted to your individual camera setup).
- Hoot replay for log playback.
- Mechanism2d implementations for the flywheel and intake.
While the primary goal of this repository is to provide an example of how to implement the kitbot mechanisms on a swerve chassis, integrating it into your own project is straightforward. These instructions assume the user has updated their device firmware, installed the latest 2026 release of the FRC Game Tools, and imaged their roboRIO with the 2026 image. Additionally, this project assumes the user is utilizing Kraken X60 motors on mechanisms that would otherwise use a CIM + Spark Max. General kitbot code for CIM + Spark Max can be found on the FIRST resource site.
Constants for each subsystem's motors are stored in that subsystem. There are three core subsystems you need to adapt for your configuration:
- Swerve - Replace the TunerConstants.java in this project with one generated by Tuner X specific to your robot.
- Flywheel - Single motor, tuned for the kitbot flywheel. If you have modified the kitbot flywheel, you may need to adjust the gain constants for your robot. Otherwise, you only need to modify the device ID (a hypothetical constants sketch follows this list).
- Intake - Single motor, tuned for the kitbot intake and indexer. If you have modified the kitbot intake/indexer, you may need to adjust the gain constants for your robot. Otherwise, you only need to modify the device ID.
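As a rough illustration, the flywheel constants might be grouped like the sketch below. This is a hypothetical example rather than the project's actual file: the device ID (30 here) and the gain values are placeholders you would replace with your own.

/* Hypothetical sketch of a flywheel subsystem's constants and configuration */
import com.ctre.phoenix6.configs.TalonFXConfiguration;
import com.ctre.phoenix6.hardware.TalonFX;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class Flywheel extends SubsystemBase {
    /* CAN ID of the flywheel Kraken X60; change this to match your wiring */
    private static final int kFlywheelMotorId = 30;

    private final TalonFX flywheelMotor = new TalonFX(kFlywheelMotorId);

    public Flywheel() {
        var config = new TalonFXConfiguration();
        /* Illustrative velocity-loop gains; use the values tuned for your flywheel */
        config.Slot0.kS = 0.25; // static friction feedforward, in volts
        config.Slot0.kV = 0.12; // volts per rotation per second
        config.Slot0.kP = 0.11; // proportional gain on velocity error
        flywheelMotor.getConfigurator().apply(config);
    }
}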
These three subsystems compose the primary functionality of the kitbot. This example expands upon the initial kitbot by integrating vision localization and a basic autonomous pipeline.
RobotContainer.java begins by initializing our subsystems and utility classes. This is mostly identical to the SwerveWithPathPlanner project, apart from SysId being removed (to clean up the project a bit) and the new subsystem and utility classes being added.
Named commands, which are necessary for PathPlanner autonomous routines, are initialized in the constructor. Notice that the commands are not defined in separate command Java files. The FRC software ecosystem has moved toward inline commands: commands constructed by chaining the command decorators provided by WPILib.
NamedCommands.registerCommand("Stop Shooting", flywheel.coastFlywheel().alongWith(intake.coastIntake()));
/* Shoot commands need a bit of time to spool up the flywheel before feeding with the intake */
NamedCommands.registerCommand("Shoot Near", flywheel.setTarget(() -> FlywheelSetpoint.Near)
.alongWith(Commands.waitUntil(isFlywheelReadyToShoot).andThen(intake.setTarget(() ->IntakeSetpoint.FeedToShoot))));
NamedCommands.registerCommand("Shoot Far", flywheel.setTarget(() -> FlywheelSetpoint.Far)
.alongWith(Commands.waitUntil(isFlywheelReadyToShoot).andThen(intake.setTarget(() ->IntakeSetpoint.FeedToShoot))));
NamedCommands.registerCommand("Stop Intake", intake.coastIntake().alongWith(flywheel.coastFlywheel()));
NamedCommands.registerCommand("Intake Fuel", intake.setTarget(() -> IntakeSetpoint.Intake).alongWith(flywheel.setTarget(()-> FlywheelSetpoint.Intake)));
NamedCommands.registerCommand("Outtake Fuel", intake.setTarget(() -> IntakeSetpoint.Outtake).alongWith(flywheel.setTarget(()-> FlywheelSetpoint.Outtake)));
autoChooser = AutoBuilder.buildAutoChooser("Only Score");
SmartDashboard.putData("Auto Mode", autoChooser);
configureBindings();
// Warmup PathPlanner to avoid Java pauses
CommandScheduler.getInstance().schedule(FollowPathCommand.warmupCommand());

The rest of RobotContainer is just plumbing for what the various buttons on the joystick do. For convenience, the bindings are listed here (assuming an Xbox gamepad), with a short sketch after the list of how a couple of them might be wired.
- Left Y - Forward and backward translation (remember: field oriented).
- Left X - Left and right translation (remember: field oriented).
- Right X - Rotational velocity, or rotational "power".
- A button - Lock the modules into an X and apply a braking force.
- B button - Allow user translation but orient the robot toward the HUB.
- D-pad povUp - Drive forward in a straight line.
- D-pad povDown - Drive backwards in a straight line.
- Start button - Re-seed our pose (pressing this makes the current orientation "forwards").
- Left Bumper - Intake FUEL.
- Left Trigger - Eject FUEL (in case there is a jam).
- Right Bumper - Spin the flywheel up to speed and begin shooting for our NEAR position.
- Right Trigger - Spin the flywheel up to speed and begin shooting for our FAR position.
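As a hedged sketch, a few of these bindings might be wired up as follows. Names like drivetrain, brake, and joystick follow the SwerveWithPathPlanner example, and the setpoint commands mirror the named commands shown above; the project's actual configureBindings may differ.

/* Illustrative fragment from configureBindings(); not necessarily the project's exact code */
// A button: lock the modules into an X and apply a braking force
joystick.a().whileTrue(drivetrain.applyRequest(() -> brake));

// Left bumper: intake FUEL (mirrors the "Intake Fuel" named command)
joystick.leftBumper().whileTrue(intake.setTarget(() -> IntakeSetpoint.Intake)
    .alongWith(flywheel.setTarget(() -> FlywheelSetpoint.Intake)));

// Start button: re-seed the pose so the current orientation becomes "forwards"
joystick.start().onTrue(drivetrain.runOnce(() -> drivetrain.seedFieldCentric()));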
Important
This documentation assumes the user has configured their coprocessor running PhotonVision correctly.
Vision samples are provided by PhotonVision, a free-to-use, community-developed vision solution that runs on Linux hardware.
A utility class called PhotonVisionSystem runs a periodic routine that integrates PhotonVision pose estimates into odometry and provides a few utility features. (It is a utility class instead of a subsystem because it doesn't implement any commands or functionality that would require it to be a subsystem.)
The vision system accomplishes two goals:
- Grab the best pose from the vision samples we get and fuse it into our odometry.
- Get the best HUB target and use it as a reference for the targetHub swerve request.
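As a rough skeleton, such a utility class might be shaped like the sketch below, with its update method called once per loop from Robot. This is hypothetical: the constructor arguments, method names, and internals are assumptions, not the project's exact code.

/* Hypothetical skeleton of a vision utility class -- a plain class, not a SubsystemBase */
import java.util.function.Consumer;
import edu.wpi.first.math.geometry.Rotation2d;

public class PhotonVisionSystem {
    /* Receives each vision pose estimate so it can be logged and fused into odometry */
    private final Consumer<LoggableRobotPose> poseConsumer;

    public PhotonVisionSystem(Consumer<LoggableRobotPose> poseConsumer) {
        this.poseConsumer = poseConsumer;
    }

    /** Called periodically (for example from Robot.robotPeriodic()). */
    public void update() {
        // Goal 1: read unread camera results, estimate robot poses, and pass them to poseConsumer
        // Goal 2: track the best HUB AprilTag and recompute the field-relative heading to the HUB
    }

    /** Heading the drivetrain should target to face the HUB (used by the targetHub request). */
    public Rotation2d getHeadingToHubFieldRelative() {
        return Rotation2d.kZero; // placeholder; the real class returns the computed hubHeading
    }
}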
The PhotonVision API makes it easy to accomplish goal 1. We begin by constructing a PhotonPoseEstimator using the default field tag layout (which corresponds to the current year's field, assuming your PhotonLib is up to date).
Important
robotToCamera will need to be adjusted to your camera layout. See the PhotonVision documentation for more information.
final AprilTagFieldLayout TagLayout = AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField);
Transform3d robotToCamera = new Transform3d(
/* X, Y, Z */
new Translation3d(Meters.of(-0.5), Meters.of(0), Meters.of(0.5)),
/* Roll, Pitch, Yaw */
/* A pitch of -40 degrees appears to see the apriltags */
new Rotation3d(Degrees.of(0), Degrees.of(-40), Degrees.of(180)));
PhotonPoseEstimator estimator = new PhotonPoseEstimator(TagLayout, robotToCamera);

We iterate through each result, using the estimator to calculate a good estimate. We first opt to use estimateCoprocMultiTagPose to calculate a pose estimate based on multiple targets in a capture. This is ideal, as the more targets integrated into a capture, the more accurate the pose should be.
Pose estimates returned by the PhotonLib API can be invalid. If our initial estimate is invalid, we fall back to the lowest-ambiguity pose available. Finally, the estimate is added to an estimates collection defined at the top of the file.
var allResults = targetCamera.getAllUnreadResults();
var estimates = new ArrayList<LoggableRobotPose>();
for (var result : allResults) {
var estimate = estimator.estimateCoprocMultiTagPose(result);
if (estimate.isEmpty()) {
estimate = estimator.estimateLowestAmbiguityPose(result);
}
estimate.ifPresent(val -> estimates.add(new LoggableRobotPose(val.estimatedPose, val.timestampSeconds)));
}

Now that we have captured all of the pose estimates, integrating them into our robot odometry is as easy as running the consumePhotonVisionMeasurement function. In the example below, poseConsumer is a Consumer that calls consumePhotonVisionMeasurement when accept is run. This allows us to set up logging on our vision estimates as they come in.
/* And process every pose we got */
for (LoggableRobotPose pose : allPoses) {
poseConsumer.accept(pose);
}

And that's it! Users will likely want to modify consumePhotonVisionMeasurement to adjust their standard deviations (think of these as an error metric controlling whether to trust vision or odometry more). Information on tuning the pose estimator can be found here.
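A minimal sketch of what that consumer might do, assuming the Phoenix 6 swerve drivetrain's addVisionMeasurement overload that takes standard deviations. The accessor names on LoggableRobotPose and the standard-deviation values are illustrative assumptions, not the project's actual code.

/* Hypothetical sketch: fuse a single vision estimate into swerve odometry.
 * Standard deviations are (x meters, y meters, heading radians); larger values mean
 * the measurement is trusted less relative to wheel odometry. */
private void consumePhotonVisionMeasurement(LoggableRobotPose visionPose) {
    drivetrain.addVisionMeasurement(
        visionPose.pose().toPose2d(),                    // assumed accessor for the Pose3d estimate
        Utils.fpgaToCurrentTime(visionPose.timestamp()), // Phoenix 6 expects its own timebase
        VecBuilder.fill(0.5, 0.5, 9999999)               // trust vision X/Y, effectively ignore vision heading
    );
}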
As mentioned previously, a targetHub swerve request is defined that applies a heading target to keep the robot rotationally aligned with the HUB. The heading the robot needs to maintain and the location of the HUB are calculated from the PhotonVision results. We have a couple of constants to set up first.
Important
We calculate the HUB location from pure vision data to ensure that our targeting strategy is robust. Odometry data can be unreliable at times (especially after crossing a bump, while waiting for vision data to gradually re-seed our odometry).
First, we define the AprilTag IDs that we care about; any of these tags can be utilized for alignment to the HUB.
Second, we define the locations of the red HUB and blue HUB relative to the specified AprilTag IDs. This is the actual "target" that will be aligned against.
/* IDs 3,4 and 19,20 are on the side of the hub that we can't shoot from, so don't include them */
private final int[] RedHubApriltagIds = new int[]{
2, /* 3, 4, */ 5, 8, 9, 10, 11
};
private final int[] BlueHubApriltagIds = new int[]{
18, /* 19, 20, */ 21, 24, 25, 26, 27
};
private final ApriltagTarget RedHub = new ApriltagTarget(Map.of(
2, new Translation3d(Inches.of(-23.5), Inches.of(0), Inches.of(33)),
5, new Translation3d(Inches.of(-23.5), Inches.of(0), Inches.of(33)),
8, new Translation3d(Inches.of(-23.5), Inches.of(-14), Inches.of(33)),
9, new Translation3d(Inches.of(-23.5), Inches.of(14), Inches.of(33)),
10, new Translation3d(Inches.of(-23.5), Inches.of(0), Inches.of(33)),
11, new Translation3d(Inches.of(-23.5), Inches.of(14), Inches.of(33))
));
private final ApriltagTarget BlueHub = new ApriltagTarget(Map.of(
18, new Translation3d(Inches.of(-23.5), Inches.of(0), Inches.of(33)),
21, new Translation3d(Inches.of(-23.5), Inches.of(0), Inches.of(33)),
24, new Translation3d(Inches.of(-23.5), Inches.of(-14), Inches.of(33)),
25, new Translation3d(Inches.of(-23.5), Inches.of(14), Inches.of(33)),
26, new Translation3d(Inches.of(-23.5), Inches.of(0), Inches.of(33)),
27, new Translation3d(Inches.of(-23.5), Inches.of(14), Inches.of(33))
));

Now we can begin our calculations to determine what the robot heading should be. Remember that the goal is to align to the HUB, not necessarily to a given AprilTag (which would otherwise be much simpler). We begin by determining what our alliance is and which hubTargetIds to utilize.
Alliance currentAlliance = DriverStation.getAlliance().orElse(Alliance.Red);
int[] hubTargetIds;
/* Figure out if we should use red alliance hub ids or blue alliance hub ids */
if (currentAlliance == Alliance.Red) {
hubTargetIds = RedHubApriltagIds;
} else {
hubTargetIds = BlueHubApriltagIds;
}

Now iterate through each result and pick the best AprilTag based on poseAmbiguity.
for (var result : allResults) {
var allTargets = result.getTargets();
/* Pick the target with the lowest ambiguity */
PhotonTrackedTarget bestTarget = null;
for (PhotonTrackedTarget target : allTargets) {
/* Check that the apriltag id is a hub ID */
if (Arrays.stream(hubTargetIds).anyMatch(x -> x == target.fiducialId)) {
/* If we've never assigned the best target, use this one */
if (bestTarget == null) {
bestTarget = target;
}
/* Otherwise only update the target if this is a better ambiguity */
else if (target.poseAmbiguity < bestTarget.poseAmbiguity && target.poseAmbiguity > 0) {
bestTarget = target;
}
}
}
...
}

We first grab the location of the tag relative to the camera by using the bestCameraToTarget member of the target we previously selected. Then we look up the relative position of the HUB based on that target's AprilTag ID.
lastTrackedHubTarget = bestTarget;
timeOfLastTrackedHubTarget = Utils.getCurrentTimeSeconds();
Transform3d tagRelativeToRobot = lastTrackedHubTarget.bestCameraToTarget;
var transformToHub = currentAlliance == Alliance.Red ? RedHub.getHubPose(lastTrackedHubTarget.fiducialId) :
BlueHub.getHubPose(lastTrackedHubTarget.fiducialId);

Cache the current robot pose, then perform a series of transformations to calculate the field-oriented (absolute) position of the HUB. Breaking this into pieces, we start with our absolutely positioned robot pose:
- new Pose3d(robotPose) - turn our 2D robot pose into a 3D robot pose.
- .transformBy(robotToCamera) - now we have the absolute position of the camera on the field.
- .transformBy(tagRelativeToRobot) - now we have the absolute position of the AprilTag on the field.
- .transformBy(transformToHub) - now we have the absolute position of the HUB (success!).
var robotPose = currentRobotPose.get();
hubTarget = new Pose3d(robotPose).transformBy(robotToCamera).transformBy(tagRelativeToRobot).transformBy(transformToHub);

Now that we know the absolute position of the HUB, we grab its position relative to the robot. The relativeTo call effectively gives us the position of the HUB as if the robot were the origin.
Finally, we calculate the field-relative heading that points the robot at the HUB. This is done by adding the angle of the HUB's robot-relative translation (its bearing from the robot) to our absolute robot rotation.
var hubRelativeToRobot = hubTarget.relativeTo(new Pose3d(robotPose));
hubHeading = robotPose.getRotation().plus(hubRelativeToRobot.getTranslation().toTranslation2d().getAngle()
.plus(currentAlliance == Alliance.Blue ? Rotation2d.k180deg : Rotation2d.kZero));

We then apply an offset based on whether we are on the red alliance or the blue alliance. The resulting hubHeading is then used (via the getHeadingToHubFieldRelative() function) in our aim-toward-HUB logic.
/* Use the hub target to determine where to aim */
return targetHub.withTargetDirection(vision.getHeadingToHubFieldRelative())
.withVelocityX(-joystick.getLeftY() * MaxSpeed) // Drive forward with negative Y (forward)
.withVelocityY(-joystick.getLeftX() * MaxSpeed); // Drive left with negative X (left)

This project is based on the CTR Electronics SwerveWithPathPlanner example project. Autonomous routines are created and configured through the PathPlanner GUI. These routines integrate with the named commands defined in RobotContainer.java. Two autonomous routines are included in this project (a short note on how the selected routine is scheduled follows the list):
- Only Score - This routine simply keeps the robot parked in front of the HUB as it runs the flywheel and intake to shoot FUEL.
- Score and Collect - This routine keeps the robot parked in front of the HUB as it shoots FUEL, then drives to the DEPOT to collect FUEL. Once FUEL has been collected, the robot drives back to the HUB and shoots until the end of the autonomous period.
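Following the standard WPILib template pattern, the selected routine is handed to the scheduler at the start of autonomous. A minimal sketch, assuming the template's usual method name:

/* Robot.autonomousInit() schedules whatever RobotContainer hands back */
public Command getAutonomousCommand() {
    /* The chooser was built by AutoBuilder.buildAutoChooser("Only Score") above */
    return autoChooser.getSelected();
}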

