
Conversation


@ntBre ntBre commented Nov 11, 2025

Summary

This PR makes two changes to our formatting of lambda expressions:

  1. We now parenthesize the body expression when it expands over multiple lines
  2. We now try to keep the parameter list on a single line

The latter of these fixes #8179:

Black's formatting and this PR's formatting (they agree here):

```python
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        ),
    )
```

Stable Ruff formatting:

```python
def a():
    return b(
        c,
        d,
        e,
        f=lambda self,
        *args,
        **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
    )
```

We don't parenthesize the body expression here because the call to aaaa... has its own parentheses, but adding a binary operator shows the new parenthesization:

```diff
@@ -3,7 +3,7 @@
         c,
         d,
         e,
-        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
-            *args, **kwargs
-        ) + 1,
+        f=lambda self, *args, **kwargs: (
+            aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
+        ),
     )
```

This is actually a new divergence from Black, which formats this input like this:

```python
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        )
        + 1,
    )
```

But I think this is an improvement, unlike the case from #8179.
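Both rules can be seen together in a minimal sketch (hypothetical names, not taken from the diffs above): the parameter list stays on one line, and the body, which expands, is wrapped in parentheses.

```python
# Rule 2: the parameter list (prefix, *args, **kwargs) stays on a single line.
# Rule 1: the body expands, so it is wrapped in parentheses.
describe = lambda prefix, *args, **kwargs: (
    f"{prefix}: {len(args)} positional, {len(kwargs)} keyword arguments"
)

assert describe("call", 1, 2, x=3) == "call: 2 positional, 1 keyword arguments"
```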

Test Plan

New tests taken from #8465, plus probably a few more that I should pull from the ecosystem results.

@ntBre ntBre added formatter Related to the formatter preview Related to preview mode features labels Nov 11, 2025

astral-sh-bot bot commented Nov 11, 2025

ruff-ecosystem results

Formatter (stable)

✅ ecosystem check detected no format changes.

Formatter (preview)

ℹ️ ecosystem check detected format changes. (+364 -296 lines in 52 files in 18 projects; 37 projects unchanged)

RasaHQ/rasa (+6 -6 lines across 2 files)

ruff format --preview

rasa/nlu/extractors/crf_entity_extractor.py~L101

         CRFEntityExtractorOptions.SUFFIX1: lambda crf_token: crf_token.text[-1:],
         CRFEntityExtractorOptions.BIAS: lambda _: "bias",
         CRFEntityExtractorOptions.POS: lambda crf_token: crf_token.pos_tag,
-        CRFEntityExtractorOptions.POS2: lambda crf_token: crf_token.pos_tag[:2]
-        if crf_token.pos_tag is not None
-        else None,
+        CRFEntityExtractorOptions.POS2: lambda crf_token: (
+            crf_token.pos_tag[:2] if crf_token.pos_tag is not None else None
+        ),
         CRFEntityExtractorOptions.UPPER: lambda crf_token: crf_token.text.isupper(),
         CRFEntityExtractorOptions.DIGIT: lambda crf_token: crf_token.text.isdigit(),
         CRFEntityExtractorOptions.PATTERN: lambda crf_token: crf_token.pattern,

rasa/nlu/featurizers/sparse_featurizer/lexical_syntactic_featurizer.py~L86

         "suffix2": lambda token: token.text[-2:],
         "suffix1": lambda token: token.text[-1:],
         "pos": lambda token: token.data.get(POS_TAG_KEY, None),
-        "pos2": lambda token: token.data.get(POS_TAG_KEY, [])[:2]
-        if POS_TAG_KEY in token.data
-        else None,
+        "pos2": lambda token: (
+            token.data.get(POS_TAG_KEY, [])[:2] if POS_TAG_KEY in token.data else None
+        ),
         "upper": lambda token: token.text.isupper(),
         "digit": lambda token: token.text.isdigit(),
     }

apache/airflow (+22 -15 lines across 4 files)

ruff format --preview

airflow-core/tests/unit/cli/commands/test_config_command.py~L355

     def test_lint_detects_multiple_issues(self, stdout_capture):
         with mock.patch(
             "airflow.configuration.conf.has_option",
-            side_effect=lambda section, option, lookup_from_deprecated: option
-            in ["check_slas", "strict_dataset_uri_validation"],
+            side_effect=lambda section, option, lookup_from_deprecated: (
+                option in ["check_slas", "strict_dataset_uri_validation"]
+            ),
         ):
             with stdout_capture as temp_stdout:
                 config_command.lint_config(cli_parser.get_parser().parse_args(["config", "lint"]))

providers/docker/tests/unit/docker/operators/test_docker.py~L154

 
         # If logs() is called with tail then only return the last value, otherwise return the whole log.
         self.client_mock.logs.side_effect = (
-            lambda **kwargs: iter(self.log_messages[-kwargs["tail"] :])
-            if "tail" in kwargs
-            else iter(self.log_messages)
+            lambda **kwargs: (
+                iter(self.log_messages[-kwargs["tail"] :]) if "tail" in kwargs else iter(self.log_messages)
+            )
         )
 
         docker_api_client_patcher.return_value = self.client_mock

providers/docker/tests/unit/docker/operators/test_docker.py~L623

         self.client_mock.attach.return_value = iter([b"container log 1 ", b"container log 2"])
         # Make sure the logs side effect is updated after the change
         self.client_mock.attach.side_effect = (
-            lambda **kwargs: iter(self.log_messages[-kwargs["tail"] :])
-            if "tail" in kwargs
-            else iter(self.log_messages)
+            lambda **kwargs: (
+                iter(self.log_messages[-kwargs["tail"] :]) if "tail" in kwargs else iter(self.log_messages)
+            )
         )
 
         kwargs = {

providers/google/tests/unit/google/cloud/hooks/test_gcs.py~L420

         mock_copy.return_value = storage.Blob(
             name=destination_object_name, bucket=storage.Bucket(mock_service, destination_bucket_name)
         )
-        mock_service.return_value.bucket.side_effect = lambda name: (
-            source_bucket
-            if name == source_bucket_name
-            else storage.Bucket(mock_service, destination_bucket_name)
+        mock_service.return_value.bucket.side_effect = (
+            lambda name: (
+                source_bucket
+                if name == source_bucket_name
+                else storage.Bucket(mock_service, destination_bucket_name)
+            )
         )
 
         self.gcs_hook.copy(

providers/google/tests/unit/google/cloud/hooks/test_gcs.py~L510

         blob = MagicMock(spec=storage.Blob)
         blob.rewrite = MagicMock(return_value=(None, None, None))
         dest_bucket.blob = MagicMock(return_value=blob)
-        mock_service.return_value.bucket.side_effect = lambda name: (
-            storage.Bucket(mock_service, source_bucket_name) if name == source_bucket_name else dest_bucket
+        mock_service.return_value.bucket.side_effect = (
+            lambda name: (
+                storage.Bucket(mock_service, source_bucket_name)
+                if name == source_bucket_name
+                else dest_bucket
+            )
         )
 
         self.gcs_hook.rewrite(

providers/http/tests/unit/http/sensors/test_http.py~L302

             method="GET",
             endpoint="/search",
             data={"client": "ubuntu", "q": "airflow"},
-            response_check=lambda response: ("apache/airflow" in response.text),
+            response_check=lambda response: "apache/airflow" in response.text,
             headers={},
         )
         op.execute({})

apache/superset (+18 -11 lines across 4 files)

ruff format --preview

superset/tags/api.py~L599

     @statsd_metrics
     @rison({"type": "array", "items": {"type": "integer"}})
     @event_logger.log_this_with_context(
-        action=lambda self, *args, **kwargs: f"{self.__class__.__name__}"
-        f".favorite_status",
+        action=lambda self, *args, **kwargs: (
+            f"{self.__class__.__name__}.favorite_status"
+        ),
         log_to_statsd=False,
     )
     def favorite_status(self, **kwargs: Any) -> Response:

superset/tags/api.py~L697

     @safe
     @statsd_metrics
     @event_logger.log_this_with_context(
-        action=lambda self, *args, **kwargs: f"{self.__class__.__name__}"
-        f".remove_favorite",
+        action=lambda self, *args, **kwargs: (
+            f"{self.__class__.__name__}.remove_favorite"
+        ),
         log_to_statsd=False,
     )
     def remove_favorite(self, pk: int) -> Response:

tests/integration_tests/security/api_tests.py~L187

         self.assert500(self._get_guest_token_with_rls(rls_rule))
 
     @with_config({
-        "GUEST_TOKEN_VALIDATOR_HOOK": lambda x: len(x["rls"]) == 1
-        and "tenant_id=" in x["rls"][0]["clause"]
+        "GUEST_TOKEN_VALIDATOR_HOOK": lambda x: (
+            len(x["rls"]) == 1 and "tenant_id=" in x["rls"][0]["clause"]
+        )
     })
     def test_guest_validator_hook_real_world_example_positive(self):
         """

tests/integration_tests/security/api_tests.py~L201

         self.assert200(self._get_guest_token_with_rls(rls_rule))
 
     @with_config({
-        "GUEST_TOKEN_VALIDATOR_HOOK": lambda x: len(x["rls"]) == 1
-        and "tenant_id=" in x["rls"][0]["clause"]
+        "GUEST_TOKEN_VALIDATOR_HOOK": lambda x: (
+            len(x["rls"]) == 1 and "tenant_id=" in x["rls"][0]["clause"]
+        )
     })
     def test_guest_validator_hook_real_world_example_negative(self):
         """

tests/unit_tests/importexport/api_test.py~L48

     mocked_export_result = [
         (
             "metadata.yaml",
-            lambda: "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n",  # noqa: E501
+            lambda: (
+                "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n"
+            ),  # noqa: E501
         ),
         ("databases/example.yaml", lambda: "<DATABASE CONTENTS>"),
     ]

tests/unit_tests/utils/test_core.py~L635

 
 
 @with_config({
-    "USER_AGENT_FUNC": lambda database,
-    source: f"{database.database_name} {source.name}"
+    "USER_AGENT_FUNC": lambda database, source: (
+        f"{database.database_name} {source.name}"
+    )
 })
 def test_get_user_agent_custom(mocker: MockerFixture, app_context: None) -> None:
     database_mock = mocker.MagicMock()

aws/aws-sam-cli (+19 -14 lines across 2 files)

ruff format --preview

samcli/lib/cli_validation/image_repository_validation.py~L70

 
             validators = [
                 Validator(
-                    validation_function=lambda: bool(image_repository)
-                    + bool(image_repositories)
-                    + bool(resolve_image_repos)
-                    > 1,
+                    validation_function=lambda: (
+                        bool(image_repository) + bool(image_repositories) + bool(resolve_image_repos) > 1
+                    ),
                     exception=click.BadOptionUsage(
                         option_name="--image-repositories",
                         ctx=ctx,

samcli/lib/cli_validation/image_repository_validation.py~L82

                     ),
                 ),
                 Validator(
-                    validation_function=lambda: not guided
-                    and not (image_repository or image_repositories or resolve_image_repos)
-                    and required,
+                    validation_function=lambda: (
+                        not guided and not (image_repository or image_repositories or resolve_image_repos) and required
+                    ),
                     exception=click.BadOptionUsage(
                         option_name="--image-repositories",
                         ctx=ctx,

samcli/lib/cli_validation/image_repository_validation.py~L92

                     ),
                 ),
                 Validator(
-                    validation_function=lambda: not guided
-                    and (
-                        image_repositories
-                        and not resolve_image_repos
-                        and not _is_all_image_funcs_provided(template_file, image_repositories, parameters_overrides)
+                    validation_function=lambda: (
+                        not guided
+                        and (
+                            image_repositories
+                            and not resolve_image_repos
+                            and not _is_all_image_funcs_provided(
+                                template_file, image_repositories, parameters_overrides
+                            )
+                        )
                     ),
                     exception=click.BadOptionUsage(
                         option_name="--image-repositories", ctx=ctx, message=image_repos_error_msg

tests/unit/commands/deploy/test_guided_context.py~L35

         self.companion_stack_manager_mock.return_value.get_unreferenced_repos.return_value = [
             self.unreferenced_repo_mock
         ]
-        self.companion_stack_manager_mock.return_value.get_repo_uri = lambda repo: (
-            "123456789012.dkr.ecr.us-east-1.amazonaws.com/test2" if repo == self.unreferenced_repo_mock else None
+        self.companion_stack_manager_mock.return_value.get_repo_uri = (
+            lambda repo: (
+                "123456789012.dkr.ecr.us-east-1.amazonaws.com/test2" if repo == self.unreferenced_repo_mock else None
+            )
         )
 
         self.verify_image_patch = patch(

binary-husky/gpt_academic (+22 -16 lines across 2 files)

ruff format --preview

crazy_functions/crazy_utils.py~L300

         exceeded_cnt = 0
         mutable[index][2] = "执行中"
         detect_timeout = (
-            lambda: len(mutable[index]) >= 2
-            and (time.time() - mutable[index][1]) > watch_dog_patience
+            lambda: (
+                len(mutable[index]) >= 2
+                and (time.time() - mutable[index][1]) > watch_dog_patience
+            )
         )
         while True:
             # watchdog error

crazy_functions/review_fns/paper_processor/paper_llm_ranker.py~L143

                     )
                 elif search_criteria.query_type == "review":
                     papers.sort(
-                        key=lambda x: 1
-                        if any(
-                            keyword in (getattr(x, "title", "") or "").lower()
-                            or keyword in (getattr(x, "abstract", "") or "").lower()
-                            for keyword in ["review", "survey", "overview"]
-                        )
-                        else 0,
+                        key=lambda x: (
+                            1
+                            if any(
+                                keyword in (getattr(x, "title", "") or "").lower()
+                                or keyword in (getattr(x, "abstract", "") or "").lower()
+                                for keyword in ["review", "survey", "overview"]
+                            )
+                            else 0
+                        ),
                         reverse=True,
                     )
             return papers[:top_k]

crazy_functions/review_fns/paper_processor/paper_llm_ranker.py~L164

         if search_criteria and search_criteria.query_type == "review":
             papers = sorted(
                 papers,
-                key=lambda x: 1
-                if any(
-                    keyword in (getattr(x, "title", "") or "").lower()
-                    or keyword in (getattr(x, "abstract", "") or "").lower()
-                    for keyword in ["review", "survey", "overview"]
-                )
-                else 0,
+                key=lambda x: (
+                    1
+                    if any(
+                        keyword in (getattr(x, "title", "") or "").lower()
+                        or keyword in (getattr(x, "abstract", "") or "").lower()
+                        for keyword in ["review", "survey", "overview"]
+                    )
+                    else 0
+                ),
                 reverse=True,
             )
 

ibis-project/ibis (+126 -106 lines across 5 files)

ruff format --preview

ibis/backends/datafusion/init.py~L246

 
         for name, func in inspect.getmembers(
             udfs,
-            predicate=lambda m: callable(m)
-            and not m.__name__.startswith("_")
-            and m.__module__ == udfs.__name__,
+            predicate=lambda m: (
+                callable(m)
+                and not m.__name__.startswith("_")
+                and m.__module__ == udfs.__name__
+            ),
         ):
             annotations = typing.get_type_hints(func)
             argnames = list(inspect.signature(func).parameters.keys())

ibis/backends/sql/dialects.py~L241

             sge.ArrayAgg: rename_func("array_agg"),
             sge.ArraySort: rename_func("array_sort"),
             sge.Length: rename_func("char_length"),
-            sge.TryCast: lambda self,
-            e: f"TRY_CAST({e.this.sql(self.dialect)} AS {e.to.sql(self.dialect)})",
+            sge.TryCast: lambda self, e: (
+                f"TRY_CAST({e.this.sql(self.dialect)} AS {e.to.sql(self.dialect)})"
+            ),
             sge.DayOfYear: rename_func("dayofyear"),
             sge.DayOfWeek: rename_func("dayofweek"),
             sge.DayOfMonth: rename_func("dayofmonth"),

ibis/backends/tests/test_aggregation.py~L1278

         )
         .groupby("bigint_col")
         .string_col.agg(
-            lambda s: (np.nan if pd.isna(s).all() else pandas_sep.join(s.values))
+            lambda s: np.nan if pd.isna(s).all() else pandas_sep.join(s.values)
         )
         .rename("tmp")
         .sort_index()

ibis/backends/tests/tpc/ds/test_queries.py~L35

         )
         .join(customer, _.ctr_customer_sk == customer.c_customer_sk)
         .filter(
-            lambda t: t.ctr_total_return
-            > ctr2.filter(t.ctr_store_sk == ctr2.ctr_store_sk)
-            .ctr_total_return.mean()
-            .as_scalar()
-            * 1.2
+            lambda t: (
+                t.ctr_total_return
+                > ctr2.filter(t.ctr_store_sk == ctr2.ctr_store_sk)
+                .ctr_total_return.mean()
+                .as_scalar()
+                * 1.2
+            )
         )
         .select(_.c_customer_id)
         .order_by(_.c_customer_id)

ibis/backends/tests/tpc/ds/test_queries.py~L783

                 > 0
             ),
             lambda t: (
-                web_sales.join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
-                .filter(
-                    t.c_customer_sk == web_sales.ws_bill_customer_sk,
-                    _.d_year == 2002,
-                    _.d_moy.between(1, 1 + 3),
+                (
+                    web_sales.join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
+                    .filter(
+                        t.c_customer_sk == web_sales.ws_bill_customer_sk,
+                        _.d_year == 2002,
+                        _.d_moy.between(1, 1 + 3),
+                    )
+                    .count()
+                    > 0
                 )
-                .count()
-                > 0
-            )
-            | (
-                catalog_sales.join(date_dim, [("cs_sold_date_sk", "d_date_sk")])
-                .filter(
-                    t.c_customer_sk == catalog_sales.cs_ship_customer_sk,
-                    _.d_year == 2002,
-                    _.d_moy.between(1, 1 + 3),
+                | (
+                    catalog_sales.join(date_dim, [("cs_sold_date_sk", "d_date_sk")])
+                    .filter(
+                        t.c_customer_sk == catalog_sales.cs_ship_customer_sk,
+                        _.d_year == 2002,
+                        _.d_moy.between(1, 1 + 3),
+                    )
+                    .count()
+                    > 0
                 )
-                .count()
-                > 0
             ),
         )
         .group_by(

ibis/backends/tests/tpc/ds/test_queries.py~L1037

             _.d_date.between(date("2002-02-01"), date("2002-04-02")),
             _.ca_state == "GA",
             _.cc_county == "Williamson County",
-            lambda t: catalog_sales.filter(
-                t.cs_order_number == _.cs_order_number,
-                t.cs_warehouse_sk != _.cs_warehouse_sk,
-            ).count()
-            > 0,
-            lambda t: catalog_returns.filter(
-                t.cs_order_number == _.cr_order_number
-            ).count()
-            == 0,
+            lambda t: (
+                catalog_sales.filter(
+                    t.cs_order_number == _.cs_order_number,
+                    t.cs_warehouse_sk != _.cs_warehouse_sk,
+                ).count()
+                > 0
+            ),
+            lambda t: (
+                catalog_returns.filter(t.cs_order_number == _.cr_order_number).count()
+                == 0
+            ),
         )
         .agg(**{
             "order count": _.cs_order_number.nunique(),

ibis/backends/tests/tpc/ds/test_queries.py~L2057

         item.view()
         .filter(
             _.i_manufact_id.between(738, 738 + 40),
-            lambda i1: item.filter(
-                lambda s: (
-                    (i1.i_manufact == s.i_manufact)
-                    & (
-                        (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("powder", "khaki"))
-                            & s.i_units.isin(("Ounce", "Oz"))
-                            & s.i_size.isin(("medium", "extra large"))
-                        )
-                        | (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("brown", "honeydew"))
-                            & s.i_units.isin(("Bunch", "Ton"))
-                            & s.i_size.isin(("N/A", "small"))
-                        )
-                        | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("floral", "deep"))
-                            & s.i_units.isin(("N/A", "Dozen"))
-                            & s.i_size.isin(("petite", "petite"))
-                        )
-                        | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("light", "cornflower"))
-                            & s.i_units.isin(("Box", "Pound"))
-                            & s.i_size.isin(("medium", "extra large"))
-                        )
-                    )
-                )
-                | (
-                    (i1.i_manufact == s.i_manufact)
-                    & (
+            lambda i1: (
+                item.filter(
+                    lambda s: (
                         (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("midnight", "snow"))
-                            & s.i_units.isin(("Pallet", "Gross"))
-                            & s.i_size.isin(("medium", "extra large"))
-                        )
-                        | (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("cyan", "papaya"))
-                            & s.i_units.isin(("Cup", "Dram"))
-                            & s.i_size.isin(("N/A", "small"))
-                        )
-                        | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("orange", "frosted"))
-                            & s.i_units.isin(("Each", "Tbl"))
-                            & s.i_size.isin(("petite", "petite"))
+                            (i1.i_manufact == s.i_manufact)
+                            & (
+                                (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("powder", "khaki"))
+                                    & s.i_units.isin(("Ounce", "Oz"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                                | (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("brown", "honeydew"))
+                                    & s.i_units.isin(("Bunch", "Ton"))
+                                    & s.i_size.isin(("N/A", "small"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("floral", "deep"))
+                                    & s.i_units.isin(("N/A", "Dozen"))
+                                    & s.i_size.isin(("petite", "petite"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("light", "cornflower"))
+                                    & s.i_units.isin(("Box", "Pound"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                            )
                         )
                         | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("forest", "ghost"))
-                            & s.i_units.isin(("Lb", "Bundle"))
-                            & s.i_size.isin(("medium", "extra large"))
+                            (i1.i_manufact == s.i_manufact)
+                            & (
+                                (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("midnight", "snow"))
+                                    & s.i_units.isin(("Pallet", "Gross"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                                | (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("cyan", "papaya"))
+                                    & s.i_units.isin(("Cup", "Dram"))
+                                    & s.i_size.isin(("N/A", "small"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("orange", "frosted"))
+                                    & s.i_units.isin(("Each", "Tbl"))
+                                    & s.i_size.isin(("petite", "petite"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("forest", "ghost"))
+                                    & s.i_units.isin(("Lb", "Bundle"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                            )
                         )
                     )
-                )
-            ).count()
-            > 0,
+                ).count()
+                > 0
+            ),
         )
         .select(_.i_product_name)
         .distinct()

ibis/backends/tests/tpc/ds/test_queries.py~L4491

         customer_total_return.join(customer, [("ctr_customer_sk", "c_customer_sk")])
         .join(customer_address, [("c_current_addr_sk", "ca_address_sk")])
         .filter(
-            lambda ctr1: ctr1.ctr_total_return
-            > (
-                ctr2.filter(ctr1.ctr_state == _.ctr_state).ctr_total_return.mean() * 1.2
-            ).as_scalar(),
+            lambda ctr1: (
+                ctr1.ctr_total_return
+                > (
+                    ctr2.filter(ctr1.ctr_state == _.ctr_state).ctr_total_return.mean()
+                    * 1.2
+                ).as_scalar()
+            ),
             _.ca_state == "GA",
         )
         .select(

ibis/backends/tests/tpc/ds/test_queries.py~L4913

         .filter(
             _.i_manufact_id == 350,
             _.d_date.between(date("2000-01-07"), date("2000-04-26")),
-            lambda t: t.ws_ext_discount_amt
-            > (
-                web_sales.join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
-                .filter(
-                    t.i_item_sk == _.ws_item_sk,
-                    _.d_date.between(date("2000-01-07"), date("2000-04-26")),
+            lambda t: (
+                t.ws_ext_discount_amt
+                > (
+                    web_sales.join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
+                    .filter(
+                        t.i_item_sk == _.ws_item_sk,
+                        _.d_date.between(date("2000-01-07"), date("2000-04-26")),
+                    )
+                    .ws_ext_discount_amt.mean()
+                    .as_scalar()
+                    * 1.3
                 )
-                .ws_ext_discount_amt.mean()
-                .as_scalar()
-                * 1.3
             ),
         )
         .select(_.ws_ext_discount_amt.sum().name("Excess Discount Amount"))

ibis/tests/benchmarks/test_benchmarks.py~L693

 
     path = str(tmp_path_factory.mktemp("duckdb") / "data.ddb")
     sql = (
-        lambda var, table, n=N: f"""
+        lambda var, table, n=N: (
+            f"""
         CREATE TABLE {table} AS
         SELECT ROW_NUMBER() OVER () AS id, {var}
         FROM (

ibis/tests/benchmarks/test_benchmarks.py~L702

             ORDER BY RANDOM()
         )
         """
+        )
     )
 
     with duckdb.connect(path) as cur:

langchain-ai/langchain (+46 -20 lines across 1 file)

ruff format --preview

libs/core/tests/unit_tests/runnables/test_history.py~L53

 
 def test_input_messages() -> None:
     runnable = RunnableLambda(
-        lambda messages: "you said: "
-        + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
+        lambda messages: (
+            "you said: "
+            + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
+        )
     )
     store: dict = {}
     get_session_history = _get_get_session_history(store=store)

libs/core/tests/unit_tests/runnables/test_history.py~L82

 
 async def test_input_messages_async() -> None:
     runnable = RunnableLambda(
-        lambda messages: "you said: "
-        + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
+        lambda messages: (
+            "you said: "
+            + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
+        )
     )
     store: dict = {}
     get_session_history = _get_get_session_history(store=store)

libs/core/tests/unit_tests/runnables/test_history.py~L113

 
 def test_input_dict() -> None:
     runnable = RunnableLambda(
-        lambda params: "you said: "
-        + "\n".join(
-            str(m.content) for m in params["messages"] if isinstance(m, HumanMessage)
+        lambda params: (
+            "you said: "
+            + "\n".join(
+                str(m.content)
+                for m in params["messages"]
+                if isinstance(m, HumanMessage)
+            )
         )
     )
     get_session_history = _get_get_session_history()

libs/core/tests/unit_tests/runnables/test_history.py~L133

 
 async def test_input_dict_async() -> None:
     runnable = RunnableLambda(
-        lambda params: "you said: "
-        + "\n".join(
-            str(m.content) for m in params["messages"] if isinstance(m, HumanMessage)
+        lambda params: (
+            "you said: "
+            + "\n".join(
+                str(m.content)
+                for m in params["messages"]
+                if isinstance(m, HumanMessage)
+            )
         )
     )
     get_session_history = _get_get_session_history()

libs/core/tests/unit_tests/runnables/test_history.py~L155

 
 def test_input_dict_with_history_key() -> None:
     runnable = RunnableLambda(
-        lambda params: "you said: "
-        + "\n".join(
-            [str(m.content) for m in params["history"] if isinstance(m, HumanMessage)]
-            + [params["input"]]
+        lambda params: (
+            "you said: "
+            + "\n".join(
+                [
+                    str(m.content)
+                    for m in params["history"]
+                    if isinstance(m, HumanMessage)
+                ]
+                + [params["input"]]
+            )
         )
     )
     get_session_history = _get_get_session_history()

libs/core/tests/unit_tests/runnables/test_history.py~L177

 
 async def test_input_dict_with_history_key_async() -> None:
     runnable = RunnableLambda(
-        lambda params: "you said: "
-        + "\n".join(
-            [str(m.content) for m in params["history"] if isinstance(m, HumanMessage)]
-            + [params["input"]]
+        lambda params: (
+            "you said: "
+            + "\n".join(
+                [
+                    str(m.content)
+                    for m in params["history"]
+                    if isinstance(m, HumanMessage)
+                ]
+                + [params["input"]]
+            )
         )
     )
     get_session_history = _get_get_session_history()

libs/core/tests/unit_tests/runnables/test_history.py~L827

 
 def test_get_output_messages_no_value_error() -> None:
     runnable = _RunnableLambdaWithRaiseError(
-        lambda messages: "you said: "
-        + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
+        lambda messages: (
+            "you said: "
+            + "\n".join(str(m.content) for m in messages if isinstance(m, HumanMessage))
+        )
     )
     store: dict = {}
     get_session_history = _get_get_session_history(store=store)

mlflow/mlflow (+6 -4 lines across 2 files)

ruff format --preview

mlflow/store/model_registry/file_store.py~L898

         model_versions = []
         model_version_dirs = list_all(
             path,
-            filter_func=lambda x: os.path.isdir(x)
-            and os.path.basename(os.path.normpath(x)).startswith("version-"),
+            filter_func=lambda x: (
+                os.path.isdir(x) and os.path.basename(os.path.normpath(x)).startswith("version-")
+            ),
             full_path=True,
         )
         for directory in model_version_dirs:

tests/ag2/test_ag2_autolog.py~L211

     user_proxy = ConversableAgent(
         name="tool_agent",
         llm_config=False,
-        is_termination_msg=lambda msg: msg.get("content") is not None
-        and "TERMINATE" in msg["content"],
+        is_termination_msg=lambda msg: (
+            msg.get("content") is not None and "TERMINATE" in msg["content"]
+        ),
         human_input_mode="NEVER",
     )
     assistant.register_for_llm(name="sum", description="A simple sum calculator")(sum)

pandas-dev/pandas (+6 -5 lines across 2 files)

ruff format --preview

pandas/core/arrays/datetimes.py~L228

     _typ = "datetimearray"
     _internal_fill_value = np.datetime64("NaT", "ns")
     _recognized_scalars = (datetime, np.datetime64)
-    _is_recognized_dtype: Callable[[DtypeObj], bool] = lambda x: lib.is_np_dtype(
-        x, "M"
-    ) or isinstance(x, DatetimeTZDtype)
+    _is_recognized_dtype: Callable[[DtypeObj], bool] = (
+        lambda x: lib.is_np_dtype(x, "M") or isinstance(x, DatetimeTZDtype)
+    )
     _infer_matches = ("datetime", "datetime64", "date")
 
     @property

pandas/tests/series/indexing/test_where.py~L234

     # make sure correct exceptions are raised on invalid list assignment
 
     msg = (
-        lambda x: f"cannot set using a {x} indexer with a "
-        "different length than the value"
+        lambda x: (
+            f"cannot set using a {x} indexer with a different length than the value"
+        )
     )
     # slice
     s = Series(list("abc"), dtype=object)

prefecthq/prefect (+8 -24 lines across 2 files)

ruff format --preview

src/integrations/prefect-kubernetes/tests/test_worker.py~L273

             pod_watch_timeout_seconds=60,
             stream_output=True,
         ),
-        lambda flow_run,
-        deployment,
-        flow,
-        work_pool,
-        worker_name: KubernetesWorkerJobConfiguration(
+        lambda flow_run, deployment, flow, work_pool, worker_name: KubernetesWorkerJobConfiguration(
             command="prefect flow-run execute",
             env={
                 **get_current_settings().to_environment_variables(exclude_unset=True),

src/integrations/prefect-kubernetes/tests/test_worker.py~L589

             pod_watch_timeout_seconds=60,
             stream_output=True,
         ),
-        lambda flow_run,
-        deployment,
-        flow,
-        work_pool,
-        worker_name: KubernetesWorkerJobConfiguration(
+        lambda flow_run, deployment, flow, work_pool, worker_name: KubernetesWorkerJobConfiguration(
             command="prefect flow-run execute",
             env={
                 **get_current_settings().to_environment_variables(exclude_unset=True),

src/integrations/prefect-kubernetes/tests/test_worker.py~L778

             pod_watch_timeout_seconds=90,
             stream_output=False,
         ),
-        lambda flow_run,
-        deployment,
-        flow,
-        work_pool,
-        worker_name: KubernetesWorkerJobConfiguration(
+        lambda flow_run, deployment, flow, work_pool, worker_name: KubernetesWorkerJobConfiguration(
             command="echo hello",
             env={
                 **get_current_settings().to_environment_variables(exclude_unset=True),

src/integrations/prefect-kubernetes/tests/test_worker.py~L1099

             pod_watch_timeout_seconds=90,
             stream_output=True,
         ),
-        lambda flow_run,
-        deployment,
-        flow,
-        work_pool,
-        worker_name: KubernetesWorkerJobConfiguration(
+        lambda flow_run, deployment, flow, work_pool, worker_name: KubernetesWorkerJobConfiguration(
             command="echo hello",
             env={
                 **get_current_settings().to_environment_variables(exclude_unset=True),

src/prefect/server/events/filters.py~L163

 
 class EventOccurredFilter(EventDataFilter):
     since: DateTime = Field(
-        default_factory=lambda: prefect.types._datetime.start_of_day(
-            prefect.types._datetime.now("UTC")
-        )
-        - timedelta(days=180),
+        default_factory=lambda: (
+            prefect.types._datetime.start_of_day(prefect.types._datetime.now("UTC"))
+            - timedelta(days=180)
+        ),
         description="Only include events after this time (inclusive)",
     )
     until: DateTime = Field(

qdrant/qdrant-client (+4 -4 lines across 1 file)

ruff format --preview

tests/congruence_tests/test_common.py~L336

 
     if isinstance(res1, list):
         if is_context_search is True:
-            sorted_1 = sorted(res1, key=lambda x: (x.id))
-            sorted_2 = sorted(res2, key=lambda x: (x.id))
+            sorted_1 = sorted(res1, key=lambda x: x.id)
+            sorted_2 = sorted(res2, key=lambda x: x.id)
             compare_records(sorted_1, sorted_2, abs_tol=1e-5)
         else:
             compare_records(res1, res2)

tests/congruence_tests/test_common.py~L345

         res2, models.QueryResponse
     ):
         if is_context_search is True:
-            sorted_1 = sorted(res1.points, key=lambda x: (x.id))
-            sorted_2 = sorted(res2.points, key=lambda x: (x.id))
+            sorted_1 = sorted(res1.points, key=lambda x: x.id)
+            sorted_2 = sorted(res2.points, key=lambda x: x.id)
             compare_records(sorted_1, sorted_2, abs_tol=1e-5)
         else:
             compare_records(res1.points, res2.points)

rotki/rotki (+23 -32 lines across 9 files)

ruff format --preview

rotkehlchen/api/rest.py~L6129

                 to_timestamp=to_timestamp,
                 address=address,
                 blockchain=chain,
-                get_count_fn=lambda from_ts,
-                to_ts,
-                _chain_id=chain_id: db_evmtx.count_transactions_in_range(  # type: ignore[misc]  # noqa: E501
+                get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: db_evmtx.count_transactions_in_range(  # type: ignore[misc]  # noqa: E501
                     chain_id=_chain_id,
                     from_ts=from_ts,
                     to_ts=to_ts,

rotkehlchen/chain/evm/decoding/aave/v3/decoder.py~L426

                 ordered_events=ordered_events,
                 maybe_earned_event=maybe_earned_event,
                 earned_event=earned_event,
-                match_fn=lambda primary,
-                secondary: (  # use symbols due to Monerium and its different versions  # noqa: E501
+                match_fn=lambda primary, secondary: (  # use symbols due to Monerium and its different versions  # noqa: E501
                     (
                         underlying_token := get_single_underlying_token(
                             primary.asset.resolve_to_evm_token()

rotkehlchen/chain/evm/decoding/balancer/decoder.py~L51

             self,
             evm_inquirer=evm_inquirer,
             cache_type_to_check_for_freshness=BALANCER_CACHE_TYPE_MAPPING[counterparty],
-            query_data_method=lambda inquirer,
-            cache_type,
-            msg_aggregator,
-            reload_all: query_balancer_data(  # noqa: E501
+            query_data_method=lambda inquirer, cache_type, msg_aggregator, reload_all: query_balancer_data(  # noqa: E501
                 inquirer=inquirer,
                 cache_type=cache_type,
                 protocol=counterparty,

rotkehlchen/chain/solana/node_inquirer.py~L354

         signatures = []
         while True:
             response: GetSignaturesForAddressResp = self.query(
-                method=lambda client,
-                _before=before,
-                _until=until: client.get_signatures_for_address(  # type: ignore[misc]  # noqa: E501
+                method=lambda client, _before=before, _until=until: client.get_signatures_for_address(  # type: ignore[misc]  # noqa: E501
                     account=Pubkey.from_string(address),
                     limit=SIGNATURES_PAGE_SIZE,
                     before=_before,

rotkehlchen/data_import/importers/binance.py~L310

 
         for rows_group in rows_grouped_by_fee.values():
             rows_group.sort(
-                key=lambda x: x["Change"]
-                if same_assets
-                else x["Change"] * price_at_timestamp[x["Coin"]],
+                key=lambda x: (
+                    x["Change"] if same_assets else x["Change"] * price_at_timestamp[x["Coin"]]
+                ),
                 reverse=True,
             )  # noqa: E501
 

rotkehlchen/globaldb/handler.py~L1179

                 entry.protocol,
             ),
             token_type="evm",
-            post_insert_callback=lambda: GlobalDBHandler._add_underlying_tokens(
-                write_cursor=write_cursor,
-                parent_token_identifier=entry.identifier,
-                underlying_tokens=entry.underlying_tokens,
-                chain_id=entry.chain_id,
-            )
-            if entry.underlying_tokens is not None
-            else None,
+            post_insert_callback=lambda: (
+                GlobalDBHandler._add_underlying_tokens(
+                    write_cursor=write_cursor,
+                    parent_token_identifier=entry.identifier,
+                    underlying_tokens=entry.underlying_tokens,
+                    chain_id=entry.chain_id,
+                )
+                if entry.underlying_tokens is not None
+                else None
+            ),
         )
 
     @staticmethod

rotkehlchen/rotkehlchen.py~L617

                         ]
                     },  # noqa: E501
                 ),
-                extra_check_callback=lambda: cursor.execute(
-                    "SELECT COUNT(*) FROM user_added_solana_tokens"
-                ).fetchone()[0]
-                > 0,  # noqa: E501
+                extra_check_callback=lambda: (
+                    cursor.execute("SELECT COUNT(*) FROM user_added_solana_tokens").fetchone()[0]
+                    > 0
+                ),  # noqa: E501
             )
 
     def _logout(self) -> None:

rotkehlchen/tests/exchanges/test_cryptocom.py~L251

         ),
         patch(
             target="rotkehlchen.inquirer.Inquirer.find_usd_price",
-            side_effect=lambda asset: (FVal(112000) if asset == A_BTC else FVal(1)),
+            side_effect=lambda asset: FVal(112000) if asset == A_BTC else FVal(1),
         ),
     ):
         balances, msg = mock_cryptocom.query_balances()

rotkehlchen/tests/unit/test_solana.py~L313

             patch.object(
                 target=solana_manager.node_inquirer,
                 attribute="query",
-                side_effect=lambda method,
-                call_order=None,
-                only_archive_nodes=False,
-                endpoint=expected_endpoint: original_query(  # noqa: E501
+                side_effect=lambda method, call_order=None, only_archive_nodes=False, endpoint=expected_endpoint: original_query(  # noqa: E501
                     method=partial(check_client, method=method, expected_endpoint=endpoint),
                     call_order=call_order,
                     only_archive_nodes=only_archive_nodes,

zulip/zulip (+13 -9 lines across 1 file)

ruff format --preview

zerver/webhooks/github/view.py~L328

             "author": lambda: self.payload["discussion"]["user"]["login"].tame(check_string),
             "url": lambda: self.payload["discussion"]["html_url"].tame(check_string),
             "action": lambda: self.payload["action"].tame(check_string),
-            "configured_title": lambda: f" {self.template_values['title']()}"
-            if self.include_title
-            else "",
+            "configured_title": lambda: (
+                f" {self.template_values['title']()}" if self.include_title else ""
+            ),
             "category": lambda: self.payload["discussion"]["category"]["name"].tame(check_string),
             "title": lambda: self.payload["discussion"]["title"].tame(check_string),
             "body": lambda: self.payload["discussion"]["body"].tame(check_string),

zerver/webhooks/github/view.py~L349

             # locked_reason includes the " as " as prefix,
             # because locked_reason could be null too, in which case,
             # we drop this entire part from the message.
-            "locked_reason": lambda: f" as {self.payload['discussion']['active_lock_reason'].tame(check_string)}"
-            if self.payload["discussion"]["active_lock_reason"]
-            else "",
+            "locked_reason": lambda: (
+                f" as {self.payload['discussion']['active_lock_reason'].tame(check_string)}"
+                if self.payload["discussion"]["active_lock_reason"]
+                else ""
+            ),
             "closed_reason": lambda: self.payload["discussion"]["state_reason"].tame(check_string),
             # answer_field is used to determine which payload field to use.
             # It is either "answer" (for answered action)
             # or "old_answer" (for unanswered action)
-            "answer_field": lambda: "old_answer"
-            if self.payload["action"].tame(check_string) == "unanswered"
-            else "answer",
+            "answer_field": lambda: (
+                "old_answer"
+                if self.payload["action"].tame(check_string) == "unanswered"
+                else "answer"
+            ),
             "answer_url": lambda: self.payload[self.template_values["answer_field"]()][
                 "html_url"
             ].tame(check_string),

indico/indico (+15 -10 lines across 7 files)

ruff format --preview

indico/modules/events/forms.py~L117

     )
     category = CategoryField(
         _('Category'),
-        [UsedIf(lambda form, _: (form.listing.data or not can_create_unlisted_events(session.user))), DataRequired()],
+        [UsedIf(lambda form, _: form.listing.data or not can_create_unlisted_events(session.user)), DataRequired()],
         require_event_creation_rights=True,
         show_event_creation_warning=True,
     )

indico/modules/events/layout/forms.py~L134

     theme = SelectField(
         _('Theme'),
         [Optional(), HiddenUnless('use_custom_css', False)],
-        coerce=lambda x: (x or None),
+        coerce=lambda x: x or None,
         description=_(
             'Currently selected theme of the conference page. Click on the Preview button to '
             'preview and select a different one.'

indico/modules/events/layout/forms.py~L274

 
 
 class CSSSelectionForm(IndicoForm):
-    theme = SelectField(_('Theme'), [Optional()], coerce=lambda x: (x or None))
+    theme = SelectField(_('Theme'), [Optional()], coerce=lambda x: x or None)
 
     def __init__(self, *args, **kwargs):
         event = kwargs.pop('event')

indico/modules/events/management/forms.py~L69

 class EventDataForm(IndicoForm):
     title = StringField(_('Event title'), [DataRequired()])
     description = TextAreaField(_('Description'), widget=TinyMCEWidget(images=True, height=350))
-    url_shortcut = StringField(_('URL shortcut'), filters=[lambda x: (x or None)])
+    url_shortcut = StringField(_('URL shortcut'), filters=[lambda x: x or None])
 
     def __init__(self, *args, event, **kwargs):
         self.event = event

indico/modules/events/papers/schemas.py~L251

         lambda paper, ctx: editable_type_settings[EditableType.paper].get(paper.event, 'submission_enabled')
     )
     editing_enabled = Function(
-        lambda paper, ctx: paper.event.has_feature('editing')
-        and 'paper' in editing_settings.get(paper.event, 'editable_types')
+        lambda paper, ctx: (
+            paper.event.has_feature('editing') and 'paper' in editing_settings.get(paper.event, 'editable_types')
+        )
     )
 
 

indico/modules/events/registration/forms.py~L160

     )
     currency = SelectField(_('Currency'), [DataRequired()], description=_('The currency for new registrations'))
     notification_sender_address = StringField(
-        _('Notification sender address'), [IndicoEmail()], filters=[lambda x: (x or None)]
+        _('Notification sender address'), [IndicoEmail()], filters=[lambda x: x or None]
     )
     message_pending = TextAreaField(_('Message for pending registrations'))
     message_unpaid = TextAreaField(_('Message for unpaid registrations'))

indico/web/breadcrumbs.py~L68

             )
             category = event.category
         if category_url_factory is None:
-            category_url_factory = lambda cat, management: (
-                url_for('categories.manage_content', cat) if management and cat.can_manage(session.user) else cat.url
+            category_url_factory = (
+                lambda cat, management: (
+                    url_for('categories.manage_content', cat)
+                    if management and cat.can_manage(session.user)
+                    else cat.url
+                )
             )
         for cat in category.chain_query[::-1]:
             items.append(Breadcrumb(cat.title, category_url_factory(cat, management=management)))

indico/web/flask/templating_test.py~L133

 
 def test_template_hooks_markup():
     def _make_tpl_hook(name=''):
-        return lambda: (f'&test{name}@{current_plugin.name}' if current_plugin else f'&test{name}')
+        return lambda: f'&test{name}@{current_plugin.name}' if current_plugin else f'&test{name}'
 
     with (
         _register_template_hook_cleanup('test-hook', _make_tpl_hook(1)),

zanieb/huggingface-notebooks (+1 -1 lines across 1 file)

ruff format --preview

course/fr/chapter5/section6_tf.ipynb~L58

    "outputs": [],
    "source": [
     "issues_dataset = issues_dataset.filter(\n",
-    "    lambda x: (x[\"is_pull_request\"] == False and len(x[\"comments\"]) > 0)\n",
+    "    lambda x: x[\"is_pull_request\"] == False and len(x[\"comments\"]) > 0\n",
     ")\n",
     "issues_dataset"
    ]

openai/openai-cookbook (+11 -7 lines across 3 files)

ruff format --preview --exclude examples/mcp/databricks_mcp_cookbook.ipynb,examples/chatgpt/gpt_actions_library/gpt_action_google_drive.ipynb,examples/chatgpt/gpt_actions_library/gpt_action_redshift.ipynb,examples/chatgpt/gpt_actions_library/gpt_action_salesforce.ipynb,

examples/Search_reranking_with_cross-encoders.ipynb~L538

     "output_df[\"probability\"] = output_df[\"logprobs\"].apply(exp)\n",
     "# Reorder based on likelihood of being Yes\n",
     "output_df[\"yes_probability\"] = output_df.apply(\n",
-    "    lambda x: x[\"probability\"] * -1 + 1\n",
-    "    if x[\"prediction\"] == \"No\"\n",
-    "    else x[\"probability\"],\n",
+    "    lambda x: (\n",
+    "        x[\"probability\"] * -1 + 1 if x[\"prediction\"] == \"No\" else x[\"probability\"]\n",
+    "    ),\n",
     "    axis=1,\n",
     ")\n",
     "output_df.head()"

examples/completions_usage_api.ipynb~L1522

     "            plt.pie(\n",
     "                other_projects[\"num_model_requests\"],\n",
     "                labels=other_projects[\"project_id\"],\n",
-    "                autopct=lambda p: f\"{p:.1f}%\\n({int(p * other_total_requests / 100):,})\",\n",
+    "                autopct=lambda p: (\n",
+    "                    f\"{p:.1f}%\\n({int(p * other_total_requests / 100):,})\"\n",
+    "                ),\n",
     "                startangle=140,\n",
     "                textprops={\"fontsize\": 10},\n",
     "            )\n",

examples/vector_databases/pinecone/Using_vision_modality_for_RAG_with_Pinecone.ipynb~L574

    "source": [
     "# Add a column to flag pages with visual content\n",
     "df[\"Visual_Input_Processed\"] = df[\"PageText\"].apply(\n",
-    "    lambda x: \"Y\"\n",
-    "    if \"DESCRIPTION OF THE IMAGE OR CHART\" in x or \"TRANSCRIPTION OF THE TABLE\" in x\n",
-    "    else \"N\"\n",
+    "    lambda x: (\n",
+    "        \"Y\"\n",
+    "        if \"DESCRIPTION OF THE IMAGE OR CHART\" in x or \"TRANSCRIPTION OF THE TABLE\" in x\n",
+    "        else \"N\"\n",
+    "    )\n",
     ")\n",
     "\n",
     "\n",

python-trio/trio (+4 -2 lines across 1 file)

ruff format --preview

src/trio/_core/_tests/test_run.py~L1379

         Matcher(
             ValueError,
             "^Unique Text$",
-            lambda e: isinstance(e.__context__, IndexError)
-            and isinstance(e.__context__.__context__, KeyError),
+            lambda e: (
+                isinstance(e.__context__, IndexError)
+                and isinstance(e.__context__.__context__, KeyError)
+            ),
         ),
     ):
         async with _core.open_nursery() as nursery:

astropy/astropy (+14 -10 lines across 3 files)

ruff format --preview

astropy/io/fits/hdu/table.py~L532

                     )
                 )
 
-            self.req_cards("NAXIS", None, lambda v: (v == 2), 2, option, errs)
-            self.req_cards("BITPIX", None, lambda v: (v == 8), 8, option, errs)
+            self.req_cards("NAXIS", None, lambda v: v == 2, 2, option, errs)
+            self.req_cards("BITPIX", None, lambda v: v == 8, 8, option, errs)
             self.req_cards(
                 "TFIELDS",
                 7,
-                lambda v: (_is_int(v) and v >= 0 and v <= 999),
+                lambda v: _is_int(v) and v >= 0 and v <= 999,
                 0,
                 option,
                 errs,

astropy/io/fits/hdu/table.py~L787

         `TableHDU` verify method.
         """
         errs = super()._verify(option=option)
-        self.req_cards("PCOUNT", None, lambda v: (v == 0), 0, option, errs)
+        self.req_cards("PCOUNT", None, lambda v: v == 0, 0, option, errs)
         tfields = self._header["TFIELDS"]
         for idx in range(tfields):
             self.req_cards("TBCOL" + str(idx + 1), None, _is_int, None, option, errs)

astropy/table/pprint.py~L31

     String format functions and most user functions will not be able to deal
     with masked values, so we wrap them to ensure they are passed to str().
     """
-    return lambda format_, val: (
-        str(val) if val is np.ma.masked else format_func(format_, val)
+    return (
+        lambda format_, val: (
+            str(val) if val is np.ma.masked else format_func(format_, val)
+        )
     )
 
 

astropy/wcs/wcs.py~L3507

                     # equivalently (keep this comment so you can compare eqns):
                     # wcs_new.wcs.crpix[wcs_index] =
                     # (crpix - iview.start)*iview.step + 0.5 - iview.step/2.
-                    scale_pixel = lambda px: (
-                        (px - iview.start - 1.0) / iview.step
-                        + 0.5
-                        + 1.0 / iview.step / 2.0
+                    scale_pixel = (
+                        lambda px: (
+                            (px - iview.start - 1.0) / iview.step
+                            + 0.5
+                            + 1.0 / iview.step / 2.0
+                        )
                     )
                     crp = scale_pixel(crpix)
                     wcs_new.wcs.crpix[wcs_index] = crp

@ntBre ntBre force-pushed the brent/indent-lambda-params branch 3 times, most recently from b65c407 to 68e09d5 Compare November 11, 2025 20:13
ntBre added a commit that referenced this pull request Nov 11, 2025
@ntBre ntBre force-pushed the brent/indent-lambda-params branch from 68e09d5 to 19326a7 Compare November 12, 2025 13:42
@ntBre
Contributor Author

ntBre commented Nov 12, 2025

I reverted the indentation changes and tried using RemoveSoftLinesBuffer instead. It was rearranging trailing comments on the parameters, so for now I just skip the new format whenever any comments are present, but I assume there's a better approach here, possibly the arbitrary line length you mentioned on Discord.

But the current state of this branch at least resolves the initial deviation from Black reported in #8179.
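For context on why comments force this fallback: once a lambda's parameters span multiple lines, Python allows a comment to sit between them, and collapsing the parameters back onto one line means the formatter has to relocate that comment somewhere. A minimal illustration (hypothetical code, not taken from the ecosystem results):

```python
# Inside parentheses, a lambda's parameter list may span several lines,
# so a trailing comment can attach to an individual parameter. Joining
# the parameters onto one line would have to move this comment elsewhere.
f = (
    lambda x,  # the first operand
    y: x + y
)

assert f(2, 3) == 5
```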

Comment on lines +857 to +859
+ f=lambda self, araa, kkkwargs, aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, args, kwargs, e=1, f=2, g=2: (
+ d
+ ),
Contributor Author


Wrapping the body here is a bit silly since the ( and the d are the same length, so wrapping saves no width. But this seems like it should be quite rare in real code.

@ntBre
Contributor Author

ntBre commented Nov 12, 2025

Ecosystem results

Most of them look good with a few exceptions:

  • providers/google/tests/unit/google/cloud/hooks/test_gcs.py~L420

              mock_copy.return_value = storage.Blob(
                  name=destination_object_name, bucket=storage.Bucket(mock_service, destination_bucket_name)
              )
     -        mock_service.return_value.bucket.side_effect = lambda name: (
     -            source_bucket
     -            if name == source_bucket_name
     -            else storage.Bucket(mock_service, destination_bucket_name)
     +        mock_service.return_value.bucket.side_effect = (
     +            lambda name: (
     +                source_bucket
     +                if name == source_bucket_name
     +                else storage.Bucket(mock_service, destination_bucket_name)
     +            )
              )
      
              self.gcs_hook.copy(

    It seems like this should have been fine before? The very last astropy example is like this too.

  • providers/http/tests/unit/http/sensors/test_http.py~L302

                  method="GET",
                  endpoint="/search",
                  data={"client": "ubuntu", "q": "airflow"},
     -            response_check=lambda response: ("apache/airflow" in response.text),
     +            response_check=lambda response: "apache/airflow" in response.text,
                  headers={},
              )
              op.execute({})

    I think this is okay, just worth pointing out that we also remove parentheses if the body doesn't wrap.

  • tests/unit_tests/importexport/api_test.py~L48

          mocked_export_result = [
              (
                  "metadata.yaml",
     -            lambda: "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n",  # noqa: E501
     +            lambda: (
     +                "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n"
     +            ),  # noqa: E501
              ),
              ("databases/example.yaml", lambda: "<DATABASE CONTENTS>"),
          ]

    This seems bad: it breaks the noqa comment (although it also fixes E501 in this case). Maybe this is expected; there's another case where we move the noqa comment in the stable formatting:
                              ]
                          },  # noqa: E501
                      ),
     -                extra_check_callback=lambda: cursor.execute(
     -                    "SELECT COUNT(*) FROM user_added_solana_tokens"
     -                ).fetchone()[0]
     -                > 0,  # noqa: E501
     +                extra_check_callback=lambda: (
     +                    cursor.execute("SELECT COUNT(*) FROM user_added_solana_tokens").fetchone()[0]
     +                    > 0
     +                ),  # noqa: E501
                  )
      
          def _logout(self) -> None:
  • ibis/tests/benchmarks/test_benchmarks.py~L693

          path = str(tmp_path_factory.mktemp("duckdb") / "data.ddb")
          sql = (
     -        lambda var, table, n=N: f"""
     +        lambda var, table, n=N: (
     +            f"""
              CREATE TABLE {table} AS
              SELECT ROW_NUMBER() OVER () AS id, {var}
              FROM (

    This seems a bit questionable. The f""" feels like it could take the place of parentheses.

  • rotkehlchen/chain/solana/node_inquirer.py~L354

              signatures = []
              while True:
                  response: GetSignaturesForAddressResp = self.query(
     -                method=lambda client,
     -                _before=before,
     -                _until=until: client.get_signatures_for_address(  # type: ignore[misc]  # noqa: E501
     +                method=lambda client, _before=before, _until=until: client.get_signatures_for_address(  # type: ignore[misc]  # noqa: E501
                          account=Pubkey.from_string(address),
                          limit=SIGNATURES_PAGE_SIZE,
                          before=_before,

    This case actually seems to be what the project wants because of the noqa comment, but I'm pretty sure this line is just over their configured line length of 99. Maybe the has_own_parentheses check is too lax here?
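One way to double-check that the parenthesis removal in the providers/http case above is safe: parentheses that merely group an expression never reach Python's AST, so both spellings are guaranteed to parse identically. A small sketch using the stdlib ast module (the lambda is copied from that diff):

```python
import ast

# Grouping parentheses are not represented in Python's AST, so both
# spellings below parse to the exact same tree and the formatter's
# rewrite cannot change runtime behavior.
with_parens = 'lambda response: ("apache/airflow" in response.text)'
without_parens = 'lambda response: "apache/airflow" in response.text'

assert ast.dump(ast.parse(with_parens)) == ast.dump(ast.parse(without_parens))
print("equivalent")
```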

@ntBre ntBre changed the title [WIP] Indent lambda parameters if parameters wrap [WIP] Keep lambda parameters on one line and parenthesize the body if it expands Nov 12, 2025
Successfully merging this pull request may close these issues.

Formatter undocumented deviation: Formatting of long lambda as keyword argument
