Conversation

@github-actions

This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine; whenever you add more changesets to main, this PR will be updated.

Releases

@effect/ai@0.18.0

Minor Changes

  • #4891 0552674 Thanks @IMax153! - Make AiModel a plain Layer and remove AiPlan in favor of ExecutionPlan

    This release substantially simplifies and improves the ergonomics of using AiModel for various providers. With these changes, an AiModel now returns a plain Layer which can be used to provide services to a program that interacts with large language models.

    Before

    import { AiLanguageModel } from "@effect/ai"
    import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
    import { NodeHttpClient } from "@effect/platform-node"
    import { Config, Console, Effect, Layer } from "effect"
    
    // Produces an `AiModel<AiLanguageModel, OpenAiClient>`
    const Gpt4o = OpenAiLanguageModel.model("gpt-4o")
    
    // Generate a dad joke
    const getDadJoke = AiLanguageModel.generateText({
      prompt: "Tell me a dad joke"
    })
    
    const program = Effect.gen(function* () {
      // Build the `AiModel` into a `Provider`
      const gpt4o = yield* Gpt4o
      // Use the built `AiModel` to run the program
      const response = yield* gpt4o.use(getDadJoke)
      // Log the response
      yield* Console.log(response.text)
    })
    
    const OpenAi = OpenAiClient.layerConfig({
      apiKey: Config.redacted("OPENAI_API_KEY")
    }).pipe(Layer.provide(NodeHttpClient.layerUndici))
    
    program.pipe(Effect.provide(OpenAi), Effect.runPromise)

    After

    import { AiLanguageModel } from "@effect/ai"
    import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
    import { NodeHttpClient } from "@effect/platform-node"
    import { Config, Console, Effect, Layer } from "effect"
    
    // Produces a `Layer<AiLanguageModel, never, OpenAiClient>`
    const Gpt4o = OpenAiLanguageModel.model("gpt-4o")
    
    const program = Effect.gen(function* () {
      // Generate a dad joke
      const response = yield* AiLanguageModel.generateText({
        prompt: "Tell me a dad joke"
      })
      // Log the response
      yield* Console.log(response.text)
    }).pipe(Effect.provide(Gpt4o))
    
    const OpenAi = OpenAiClient.layerConfig({
      apiKey: Config.redacted("OPENAI_API_KEY")
    }).pipe(Layer.provide(NodeHttpClient.layerUndici))
    
    program.pipe(
      Effect.provide(OpenAi),
      Effect.runPromise
    )

    In addition, AiModel can be yield*'ed to produce a layer with no requirements.

    This shifts the requirements of building the layer into the calling effect, which is particularly useful for creating AI-powered services.

    import { AiLanguageModel } from "@effect/ai"
    import { OpenAiLanguageModel } from "@effect/ai-openai"
    import { Effect } from "effect"
    
    class DadJokes extends Effect.Service<DadJokes>()("DadJokes", {
      effect: Effect.gen(function* () {
        // Yielding the model will return a layer with no requirements
        //
        //      ┌─── Layer<AiLanguageModel>
        //      ▼
        const model = yield* OpenAiLanguageModel.model("gpt-4o")
    
        const getDadJoke = AiLanguageModel.generateText({
          prompt: "Generate a dad joke"
        }).pipe(Effect.provide(model))
    
        return { getDadJoke } as const
      })
    }) {}
    
    // The requirements are lifted into the service constructor
    //
    //          ┌─── Layer<DadJokes, never, OpenAiClient>
    //          ▼
    DadJokes.Default

Patch Changes

@effect/ai-anthropic@0.8.0

Minor Changes

  • #4891 0552674 Thanks @IMax153! - Make AiModel a plain Layer and remove AiPlan in favor of ExecutionPlan

    (Identical to the @effect/ai@0.18.0 release notes above; see that entry for the full migration guide and examples.)

Patch Changes

@effect/ai-openai@0.21.0

Minor Changes

  • #4891 0552674 Thanks @IMax153! - Make AiModel a plain Layer and remove AiPlan in favor of ExecutionPlan

    (Identical to the @effect/ai@0.18.0 release notes above; see that entry for the full migration guide and examples.)

Patch Changes

effect@3.16.0

Minor Changes

  • #4891 ee0bd5d Thanks @KhraksMamtsov! - Schedule.CurrentIterationMetadata has been added

    import { Effect, Schedule } from "effect"
    
    Effect.gen(function* () {
      const currentIterationMetadata = yield* Schedule.CurrentIterationMetadata
      //     ^? Schedule.IterationMetadata
    
      console.log(currentIterationMetadata)
    }).pipe(Effect.repeat(Schedule.recurs(2)))
    // {
    //   elapsed: Duration.zero,
    //   elapsedSincePrevious: Duration.zero,
    //   input: undefined,
    //   now: 0,
    //   recurrence: 0,
    //   start: 0
    // }
    // {
    //   elapsed: Duration.zero,
    //   elapsedSincePrevious: Duration.zero,
    //   input: undefined,
    //   now: 0,
    //   recurrence: 1,
    //   start: 0
    // }
    // {
    //   elapsed: Duration.zero,
    //   elapsedSincePrevious: Duration.zero,
    //   input: undefined,
    //   now: 0,
    //   recurrence: 2,
    //   start: 0
    // }
    
    Effect.gen(function* () {
      const currentIterationMetadata = yield* Schedule.CurrentIterationMetadata
    
      console.log(currentIterationMetadata)
    }).pipe(
      Effect.schedule(
        Schedule.intersect(Schedule.fibonacci("1 second"), Schedule.recurs(3))
      )
    )
    // {
    //   elapsed: Duration.zero,
    //   elapsedSincePrevious: Duration.zero,
    //   recurrence: 1,
    //   input: undefined,
    //   now: 0,
    //   start: 0
    // },
    // {
    //   elapsed: Duration.seconds(1),
    //   elapsedSincePrevious: Duration.seconds(1),
    //   recurrence: 2,
    //   input: undefined,
    //   now: 1000,
    //   start: 0
    // },
    // {
    //   elapsed: Duration.seconds(2),
    //   elapsedSincePrevious: Duration.seconds(1),
    //   recurrence: 3,
    //   input: undefined,
    //   now: 2000,
    //   start: 0
    // }
  • #4891 5189800 Thanks @vinassefranche! - Add HashMap.hasBy helper

    import { HashMap } from "effect"
    
    const hm = HashMap.make([1, "a"])
    HashMap.hasBy(hm, (value, key) => value === "a" && key === 1) // -> true
    HashMap.hasBy(hm, (value) => value === "b") // -> false
  • #4891 58bfeaa Thanks @jrudder! - Add round and sumAll to BigDecimal

  • #4891 194d748 Thanks @tim-smart! - add ExecutionPlan module

    An ExecutionPlan can be used with Effect.withExecutionPlan or Stream.withExecutionPlan, allowing you to provide different resources for each step of execution until the effect succeeds or the plan is exhausted.

    import { type AiLanguageModel } from "@effect/ai"
    import type { Layer } from "effect"
    import { Effect, ExecutionPlan, Schedule } from "effect"
    
    declare const layerBad: Layer.Layer<AiLanguageModel.AiLanguageModel>
    declare const layerGood: Layer.Layer<AiLanguageModel.AiLanguageModel>
    
    const ThePlan = ExecutionPlan.make(
      {
        // First try with the bad layer 2 times with a 3 second delay between attempts
        provide: layerBad,
        attempts: 2,
        schedule: Schedule.spaced(3000)
      },
      // Then try with the bad layer 3 times with a 1 second delay between attempts
      {
        provide: layerBad,
        attempts: 3,
        schedule: Schedule.spaced(1000)
      },
      // Finally try with the good layer.
      //
      // If `attempts` is omitted, the plan will only attempt once, unless a schedule is provided.
      {
        provide: layerGood
      }
    )
    
    declare const effect: Effect.Effect<
      void,
      never,
      AiLanguageModel.AiLanguageModel
    >
    const withPlan: Effect.Effect<void> = Effect.withExecutionPlan(
      effect,
      ThePlan
    )
  • #4891 918c9ea Thanks @thewilkybarkid! - Add Array.removeOption and Chunk.removeOption

  • #4891 9198e6f Thanks @TylorS! - Add parameter support for Effect.Service

    This allows you to pass parameters to the effect & scoped Effect.Service
    constructors, which will also be reflected in the .Default layer.

    import type { Layer } from "effect"
    import { Effect } from "effect"
    
    class NumberService extends Effect.Service<NumberService>()("NumberService", {
      // You can now pass a function to the `effect` and `scoped` constructors
      effect: Effect.fn(function* (input: number) {
        return {
          get: Effect.succeed(`The number is: ${input}`)
        } as const
      })
    }) {}
    
    // Pass the arguments to the `Default` layer
    const CoolNumberServiceLayer: Layer.Layer<NumberService> =
      NumberService.Default(6942)
  • #4891 2a370bf Thanks @vinassefranche! - Add Iterable.countBy and Array.countBy

    import { Array, Iterable } from "effect"
    
    const resultArray = Array.countBy([1, 2, 3, 4, 5], (n) => n % 2 === 0)
    console.log(resultArray) // 2
    
    const resultIterable = Iterable.countBy(
      [1, 2, 3, 4, 5],
      (n) => n % 2 === 0
    )
    console.log(resultIterable) // 2
  • #4891 58ccb91 Thanks @KhraksMamtsov! - The Config.port and Config.branded functions have been added.

    import { Brand, Config } from "effect"
    
    type DbPort = Brand.Branded<number, "DbPort">
    const DbPort = Brand.nominal<DbPort>()
    
    const dbPort: Config.Config<DbPort> = Config.branded(
      Config.port("DB_PORT"),
      DbPort
    )

    import { Brand, Config } from "effect"
    
    type Port = Brand.Branded<number, "Port">
    const Port = Brand.refined<Port>(
      (num) =>
        !Number.isNaN(num) && Number.isInteger(num) && num >= 1 && num <= 65535,
      (n) => Brand.error(`Expected ${n} to be a TCP port`)
    )
    
    const dbPort: Config.Config<Port> = Config.number("DB_PORT").pipe(
      Config.branded(Port)
    )
  • #4891 fd47834 Thanks @tim-smart! - return a proxy Layer from LayerMap service

    The new usage is:

    import { NodeRuntime } from "@effect/platform-node"
    import { Context, Effect, FiberRef, Layer, LayerMap } from "effect"
    
    class Greeter extends Context.Tag("Greeter")<
      Greeter,
      {
        greet: Effect.Effect<string>
      }
    >() {}
    
    // create a service that wraps a LayerMap
    class GreeterMap extends LayerMap.Service<GreeterMap>()("GreeterMap", {
      // define the lookup function for the layer map
      //
      // The returned Layer will be used to provide the Greeter service for the
      // given name.
      lookup: (name: string) =>
        Layer.succeed(Greeter, {
          greet: Effect.succeed(`Hello, ${name}!`)
        }),
    
      // If a layer is not used for a certain amount of time, it can be removed
      idleTimeToLive: "5 seconds",
    
      // Supply the dependencies for the layers in the LayerMap
      dependencies: []
    }) {}
    
    // usage
    const program: Effect.Effect<void, never, GreeterMap> = Effect.gen(
      function* () {
        // access and use the Greeter service
        const greeter = yield* Greeter
        yield* Effect.log(yield* greeter.greet)
      }
    ).pipe(
      // use the GreeterMap service to provide a variant of the Greeter service
      Effect.provide(GreeterMap.get("John"))
    )
    
    // run the program
    program.pipe(Effect.provide(GreeterMap.Default), NodeRuntime.runMain)

@effect/cli@0.63.0

Patch Changes

@effect/cluster@0.36.0

Patch Changes

@effect/experimental@0.48.0

Patch Changes

@effect/opentelemetry@0.50.0

Patch Changes

@effect/platform@0.84.0

Patch Changes

@effect/platform-browser@0.64.0

Patch Changes

@effect/platform-bun@0.67.0

Patch Changes

@effect/platform-node@0.83.0

Patch Changes

@effect/platform-node-shared@0.37.0

Patch Changes

@effect/printer@0.44.0

Patch Changes

@effect/printer-ansi@0.44.0

Patch Changes

@effect/rpc@0.61.0

Patch Changes

@effect/sql@0.37.0

Patch Changes

@effect/sql-clickhouse@0.25.0

Patch Changes

@effect/sql-d1@0.35.0

Patch Changes

@effect/sql-drizzle@0.36.0

Patch Changes

@effect/sql-kysely@0.33.0

Patch Changes

@effect/sql-libsql@0.27.0

Patch Changes

@effect/sql-mssql@0.38.0

Patch Changes

@effect/sql-mysql2@0.38.0

Patch Changes

@effect/sql-pg@0.38.0

Patch Changes

@effect/sql-sqlite-bun@0.38.0

Patch Changes

@effect/sql-sqlite-do@0.15.0

Patch Changes

@effect/sql-sqlite-node@0.38.0

Patch Changes

@effect/sql-sqlite-react-native@0.40.0

Patch Changes

@effect/sql-sqlite-wasm@0.38.0

Patch Changes

@effect/typeclass@0.35.0

Patch Changes

@effect/vitest@0.23.0

Patch Changes

@tim-smart tim-smart merged commit 6866230 into main May 27, 2025
@tim-smart tim-smart deleted the changeset-release/main branch May 27, 2025 22:24