raydac/jcp-ai

JCP-AI Project logo
License Apache 2.0 Java 17+ Maven 3.8+
Maven Central
Arthur's acres sanctuary donation

Changelog

1.0.3 (13-sep-2025)

  • improved code extraction from the model answer
  • jcp-ai-openai now uses com.openai:openai-java:3.5.2 as its base library
  • jcp-ai-anthropic now uses com.anthropic:anthropic-java:2.7.0 as its base library
  • jcp-ai-gemini now uses com.google.genai:google-genai:1.16.0 as its base library

1.0.2 (13-aug-2025)

  • fixed truncation of prompt cache files
  • jcp-ai-anthropic now uses com.anthropic:anthropic-java:2.5.0 as its base library
  • jcp-ai-gemini now uses com.google.genai:google-genai:1.11.0 as its base library
  • jcp-ai-openai now uses com.openai:openai-java:3.0.2 as its base library

Full changelog

Pre-word

A long time ago, I created one of the first Java preprocessors (called JCP) to make building projects easier. The preprocessor's business is to read and change the program text. LLMs also work by generating text based on given input, so combining them with a preprocessor is a logical step.

The JCP preprocessor allows you to keep blocks of text in comments, and starting from version 7.2.0, it can send them to external services for processing. This gave me the idea to connect it with an LLM, so the result from the LLM could be inserted directly into the program code (with minor normalizations).

Since the preprocessor can work with Maven, Gradle, and Ant, the ability to use LLMs automatically becomes available for these build tools as well.

How does it work?

JCP-AI is a set of extension libraries that provide specialized services capable of calling external LLMs to process text. I’ve added support for LLMs that have official open-source Java clients.
Currently, it provides connectors for:

  • OpenAI (jcp-ai-openai)
  • Anthropic Claude (jcp-ai-anthropic)
  • Google Gemini (jcp-ai-gemini)

Sequence diagram

The preprocessor discovers JCP-AI processors through Java's service registration mechanism, so it is enough for them to be present on its classpath to become automatically available. For better flexibility and compatibility, JCP-AI connector libraries don’t include any client code themselves; instead, they rely on a client library already present in the classpath.
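
Under the hood this is the standard java.util.ServiceLoader mechanism. Below is a generic, purely illustrative sketch of how such discovery works; the interface and class names are hypothetical and are not JCP-AI's real API.

// Generic sketch of Java's service registration mechanism (java.util.ServiceLoader).
// All names here are illustrative only, not JCP-AI's actual API.
package com.example;

import java.util.ServiceLoader;

// hypothetical service interface that a connector JAR would implement
interface CommentTextProcessor {
  String process(String prompt);
}

public final class DiscoveryDemo {
  public static void main(final String[] args) {
    // A provider JAR registers its implementation by shipping a text file
    //   META-INF/services/com.example.CommentTextProcessor
    // containing the implementation class name; every registered implementation
    // found on the classpath is then discovered automatically.
    for (final CommentTextProcessor processor : ServiceLoader.load(CommentTextProcessor.class)) {
      System.out.println("discovered: " + processor.getClass().getName());
    }
  }
}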

Inject prompts into sources

Prompts are written in the source code as single-line comments using //$ or //$$, with the prefix AI> for single-line prompts, or """AI> for multi-line prompts.

To make JCP 7.2.1 recognize the """ marked lines as a single text block, the allowBlocks flag must be set to true.

JCP also supports the //#- and //#+ directives to control whether lines are included in the final output file:

  • //#- disables output (lines are excluded).
  • //#+ enables output again.

This combination of directives allows method stubs to be excluded from the final generated code.

Below is an example of how to define a prompt to generate a method, and then replace that prompt with the result from a language model (LLM).

//$"""AI> code level is Java /*$mvn.project.property.maven.compiler.release$*/
//$"""AI> generate method implements fastest sort algorithm with minimal memory overhead, the speed is priority:
//$"""AI>     public static int [] fastSort(final int [] array, final boolean asc)
//$"""AI> where arguments are
//$"""AI>   int [] array is array to be sorted
//$"""AI>   asc is flag shows if true then ascending order for result, descending order otherwise
//$"""AI> it returns the same incoming array if it is null, empty or single value array, else returns new version of array with sorted values.
//$"""AI> the method should contain whole implementation of sort algorithm without any use of third side libraries, helpers and utility classes
//$"""AI> can't have additional methods and functions, all implementation must be as the single method
//$"""AI>
//$"""AI> special requirements and restrictions:
//$"""AI> 1. the method has javadoc header description
//$"""AI> 2. the method doesn't contain any internal method comment, only lines of code
//$"""AI> 3. don't use both single line comments and block comments inside the method code
//$"""AI> 4. if any import needed then use canonical class name and don't add import section
//$"""AI> 5. it is only method, must not have any class wrapping
//#-
public static int[] fastSort(final int[] array, final boolean asc) {
  throw new UnsupportedOperationException("not generated");
}
//#+

All consecutive lines marked with //$"""AI> are recognized as a single prompt: they are accumulated into one text block and provided to JCP-AI for processing. After processing, the result fully replaces the prompt text. The resulting sources can be found in the Maven project under target/generated-sources/preprocessed.
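
For illustration only, a generated replacement for the stub above might look roughly like the single method below (a hand-written sketch, not actual model output):

/**
 * Sorts the provided int array using an in-place heap sort.
 *
 * @param array the array to sort, may be null
 * @param asc   if true the result is sorted in ascending order, otherwise descending
 * @return the same array if it is null, empty or contains a single value, otherwise a new array with sorted values
 */
public static int[] fastSort(final int[] array, final boolean asc) {
  if (array == null || array.length < 2) {
    return array;
  }
  final int n = array.length;
  final int[] result = new int[n];
  for (int i = 0; i < n; i++) {
    result[i] = array[i];
  }
  for (int start = (n >>> 1) - 1; start >= 0; start--) {
    int root = start;
    int child;
    while ((child = (root << 1) + 1) < n) {
      if (child + 1 < n && result[child + 1] > result[child]) {
        child++;
      }
      if (result[root] >= result[child]) {
        break;
      }
      final int tmp = result[root];
      result[root] = result[child];
      result[child] = tmp;
      root = child;
    }
  }
  for (int end = n - 1; end > 0; end--) {
    int tmp = result[0];
    result[0] = result[end];
    result[end] = tmp;
    int root = 0;
    int child;
    while ((child = (root << 1) + 1) < end) {
      if (child + 1 < end && result[child + 1] > result[child]) {
        child++;
      }
      if (result[root] >= result[child]) {
        break;
      }
      tmp = result[root];
      result[root] = result[child];
      result[child] = tmp;
      root = child;
    }
  }
  if (!asc) {
    for (int i = 0, j = n - 1; i < j; i++, j--) {
      final int tmp = result[i];
      result[i] = result[j];
      result[j] = tmp;
    }
  }
  return result;
}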

JCP-AI tuning

Requests to LLMs are not cheap, so I have provided a way to cache their responses. Set the JCP global variable jcpai.prompt.cache.file to the path of a cache file through the preprocessor configuration, and JCP-AI will start saving prompt results in that file as JSON. On every call it looks for an already cached response for the prompt and injects the cached text if one is present.

JCP-AI parameters

All JCP-AI parameters can be provided as local or global JCP variables; in the build plugins this is the vars configuration section.

Common variables

JCP-AI provides a set of common parameters shared by all connectors (see the configuration example after the list):

  • jcpai.prompt.cache.file - path to a cache file that stores prompt results in JSON format
  • jcpai.prompt.only.processor - if multiple JCP-AI connectors are detected as services, all of them are called for the same prompt and their results are accumulated; this parameter allows specifying the single connector to be called if needed
  • jcpai.prompt.temperature - float value defining the temperature for the LLM request
  • jcpai.prompt.timeout.ms - request timeout in milliseconds; the value is passed directly to the calling REST client, which is responsible for applying it
  • jcpai.prompt.top.p - top-p parameter for the LLM request, if the client supports it
  • jcpai.prompt.top.k - top-k parameter for the LLM request, if the client supports it
  • jcpai.prompt.seed - seed parameter for the LLM request, if the client supports it
  • jcpai.prompt.max.tokens - limit on the number of output tokens for the LLM request, if the client supports it
  • jcpai.prompt.instruction.system - text sent as the system instruction together with the prompt; if not defined, a default one is sent
  • jcpai.prompt.distillate.response - boolean flag to distill the LLM response, removing wrapping parentheses and extracting the markdown section (default true)
  • jcpai.prompt.cache.file.gc.threshold - number of builds after which an unused cached response is deleted from the cache file (default 15)
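
For example, in a Maven build several of these common parameters can be passed through the plugin's vars section; the values below are purely illustrative:

<vars>
    <jcpai.prompt.cache.file>${project.basedir}/jcp_ai_cache.json</jcpai.prompt.cache.file>
    <jcpai.prompt.temperature>0.2</jcpai.prompt.temperature>
    <jcpai.prompt.max.tokens>4096</jcpai.prompt.max.tokens>
    <jcpai.prompt.timeout.ms>120000</jcpai.prompt.timeout.ms>
</vars>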

How to build?

The project can be built with Maven by simply running mvn clean install in the root project folder.
If you also want to build and run the tests, use the it profile (mvn clean install -Pit), but secret_properties.properties files must be prepared in the test projects to provide model parameters.

Tuning of build systems

Gradle

For Gradle you should extend your build.gradle to load JCP, JCP-AI and an LLM client library into the classpath used during preprocessing.

Make build.gradle

For Gradle 9, the build script may look like the one used for the test project; the secret_properties.properties file it reads is shown after the script.

buildscript {
  repositories {
    mavenLocal()
    mavenCentral()
  }
  dependencies {
    classpath "com.igormaznitsa:jcp:7.2.1"
      classpath "com.igormaznitsa:jcp-ai-gemini:1.0.3"
      classpath "com.google.genai:google-genai:1.16.0"
  }
}

apply plugin: 'java'
apply plugin: 'application'
apply plugin: 'com.igormaznitsa.jcp'

repositories {
  mavenLocal()
  mavenCentral()
}

dependencies {
  testImplementation 'org.junit.jupiter:junit-jupiter-api:5.13.4'
  testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.13.4'
  testRuntimeOnly 'org.junit.platform:junit-platform-launcher:1.13.4'
}

test {
  useJUnitPlatform()
}

java {
  sourceCompatibility = JavaVersion.VERSION_17
  targetCompatibility = JavaVersion.VERSION_17
}

def propsFile = file("secret_properties.properties")
def configProps = new Properties()
propsFile.withInputStream { configProps.load(it) }

preprocess {
  sources = sourceSets.main.java.srcDirs
  allowBlocks = true
  preserveIndents = true
  keepComments = 'remove_jcp_only'
  vars = [
          'jcpai.gemini.model'     : "${configProps['jcpai.gemini.model']}",
          'jcpai.gemini.api.key'   : "${configProps['jcpai.gemini.api.key']}",
          'jcpai.prompt.cache.file': "${project.projectDir}/jcp_ai_gemini_cache.json",
          'java.release'           : 17
  ]
}

task(changeSourceFolder) {
  sourceSets.main.java.srcDirs = [preprocess.target]
}.dependsOn preprocess

compileJava.dependsOn preprocess
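
The script above reads the Gemini model name and API key from a secret_properties.properties file placed next to build.gradle. A minimal example with placeholder values might look like:

jcpai.gemini.model=gemini-2.5-flash
jcpai.gemini.api.key=YOUR_GEMINI_API_KEY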

Maven

Let's take a look at a small example of how to inject a bit of AI into a Maven project and benefit from it during the build.

Tuning of pom.xml

As the first step, we should tune the project's pom.xml to inject JCP into the build process and include JCP-AI. Let's use Gemini as the target LLM. In this case the build section should look like the snippet below:

<build>
    <plugins>
        <plugin>
            <groupId>com.igormaznitsa</groupId>
            <artifactId>jcp</artifactId>
            <version>7.2.1</version>
            <executions>
                <execution>
                    <id>preprocessSources</id>
                    <phase>generate-sources</phase>
                    <goals>
                        <goal>preprocess</goal>
                    </goals>
                    <configuration>
                        <allowBlocks>true</allowBlocks>
                        <preserveIndents>true</preserveIndents>
                        <vars>
                            <jcpai.gemini.model>${jcpai.gemini.model}</jcpai.gemini.model>
                            <jcpai.gemini.api.key>${jcpai.gemini.api.key}</jcpai.gemini.api.key>
                            <jcpai.prompt.cache.file>${jcpai.prompt.cache.file}</jcpai.prompt.cache.file>
                        </vars>
                    </configuration>
                </execution>
            </executions>
            <dependencies>
              <dependency>
                <groupId>com.igormaznitsa</groupId>
                <artifactId>jcp-ai-gemini</artifactId>
                <version>1.0.3</version>
              </dependency>
              <dependency>
                <groupId>com.google.genai</groupId>
                <artifactId>google-genai</artifactId>
                <version>1.16.0</version>
              </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

Through the dependency section of the JCP plugin, we inject the JCP-AI Gemini connector and the official Gemini REST client library. I intentionally don't include client dependencies in the JCP-AI connectors, so their versions can be changed easily and no hard dependency link is kept between them.
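
Because the vars section above references Maven properties, one simple way to supply them (assuming they are not already defined in the POM or settings) is on the command line, with placeholder values:

mvn clean install -Djcpai.gemini.model=gemini-2.5-flash -Djcpai.gemini.api.key=YOUR_GEMINI_API_KEY -Djcpai.prompt.cache.file=jcp_ai_gemini_cache.json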
