Commit

core[patch]: Fix word spelling error in `globals.py` (langchain-ai#24532)

Fix word spelling error in `globals.py`

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2 people authored and olgamurraft committed Aug 16, 2024
1 parent 5409c2d commit d6ac4f9
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions libs/core/langchain_core/globals.py
@@ -77,7 +77,7 @@ def get_verbose() -> bool:
         # In the meantime, the `verbose` setting is considered True if either the old
         # or the new value are True. This accommodates users who haven't migrated
         # to using `set_verbose()` yet. Those users are getting deprecation warnings
-        # directing them to use `set_verbose()` when they import `langhchain.verbose`.
+        # directing them to use `set_verbose()` when they import `langchain.verbose`.
         old_verbose = langchain.verbose
     except ImportError:
         old_verbose = False
@@ -142,7 +142,7 @@ def get_debug() -> bool:
         # In the meantime, the `debug` setting is considered True if either the old
         # or the new value are True. This accommodates users who haven't migrated
         # to using `set_debug()` yet. Those users are getting deprecation warnings
-        # directing them to use `set_debug()` when they import `langhchain.debug`.
+        # directing them to use `set_debug()` when they import `langchain.debug`.
         old_debug = langchain.debug
     except ImportError:
         old_debug = False
@@ -213,7 +213,7 @@ def get_llm_cache() -> "BaseCache":
         # or the old value if both are falsy. This accommodates users
         # who haven't migrated to using `set_llm_cache()` yet.
         # Those users are getting deprecation warnings directing them
-        # to use `set_llm_cache()` when they import `langhchain.llm_cache`.
+        # to use `set_llm_cache()` when they import `langchain.llm_cache`.
         old_llm_cache = langchain.llm_cache
     except ImportError:
         old_llm_cache = None
