
Commit 479ec6c

Merge pull request #73 from lumalabs/release-please--branches--main--changes--next
release: 1.2.2
2 parents: 7a1b1e6 + ef23323

16 files changed (+137 −65 lines)

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  ".": "1.2.1"
+  ".": "1.2.2"
 }

.stats.yml

Lines changed: 1 addition & 1 deletion
@@ -1,2 +1,2 @@
 configured_endpoints: 8
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/luma-ai-karanganesan%2Fluma_ai-15f705a5789a4671a5cba160123f7325eff333b93dab4292e25ee92e2ef15a68.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/luma-ai-karanganesan%2Fluma_ai-15f705a5789a4671a5cba160123f7325eff333b93dab4292e25ee92e2ef15a68.yml

CHANGELOG.md

Lines changed: 18 additions & 0 deletions
@@ -1,5 +1,23 @@
 # Changelog
 
+## 1.2.2 (2024-12-17)
+
+Full Changelog: [v1.2.1...v1.2.2](https://github.com/lumalabs/lumaai-python/compare/v1.2.1...v1.2.2)
+
+### Chores
+
+* **internal:** add support for TypeAliasType ([#75](https://github.com/lumalabs/lumaai-python/issues/75)) ([487ea05](https://github.com/lumalabs/lumaai-python/commit/487ea05a5a5a1e25d311049b58c43e0b781fcb11))
+* **internal:** bump pyright ([#74](https://github.com/lumalabs/lumaai-python/issues/74)) ([684e61b](https://github.com/lumalabs/lumaai-python/commit/684e61b45dd704c13b38585a307a1fbc39796fe5))
+* **internal:** codegen related update ([#72](https://github.com/lumalabs/lumaai-python/issues/72)) ([2b28fc6](https://github.com/lumalabs/lumaai-python/commit/2b28fc615f654f7b87efc24f712390be53775141))
+* **internal:** codegen related update ([#76](https://github.com/lumalabs/lumaai-python/issues/76)) ([d16f720](https://github.com/lumalabs/lumaai-python/commit/d16f720b4c14d5de970808840e1621924a7bd1fa))
+* **internal:** codegen related update ([#77](https://github.com/lumalabs/lumaai-python/issues/77)) ([9bf4a43](https://github.com/lumalabs/lumaai-python/commit/9bf4a435ac0be28c6db5fb2950ff650e1584332a))
+* **internal:** updated imports ([#78](https://github.com/lumalabs/lumaai-python/issues/78)) ([3f247e8](https://github.com/lumalabs/lumaai-python/commit/3f247e8c1f1be4927bbc85d51f12de6ab7308496))
+
+
+### Documentation
+
+* **readme:** example snippet for client context manager ([#79](https://github.com/lumalabs/lumaai-python/issues/79)) ([ddf9360](https://github.com/lumalabs/lumaai-python/commit/ddf9360a133c06f5ff1271cea46ec202c2b989cb))
+
 ## 1.2.1 (2024-12-04)
 
 Full Changelog: [v1.2.0...v1.2.1](https://github.com/lumalabs/lumaai-python/compare/v1.2.0...v1.2.1)

README.md

Lines changed: 14 additions & 3 deletions
@@ -288,18 +288,19 @@ can also get all the extra fields on the Pydantic model as a dict with
 
 You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
 
-- Support for proxies
-- Custom transports
+- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
+- Custom [transports](https://www.python-httpx.org/advanced/transports/)
 - Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
 
 ```python
+import httpx
 from lumaai import LumaAI, DefaultHttpxClient
 
 client = LumaAI(
     # Or use the `LUMAAI_BASE_URL` env var
     base_url="http://my.test.server.example.com:8083",
     http_client=DefaultHttpxClient(
-        proxies="http://my.test.proxy.example.com",
+        proxy="http://my.test.proxy.example.com",
         transport=httpx.HTTPTransport(local_address="0.0.0.0"),
     ),
 )
@@ -315,6 +316,16 @@ client.with_options(http_client=DefaultHttpxClient(...))
 
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
+```py
+from lumaai import LumaAI
+
+with LumaAI() as client:
+    # make requests here
+    ...
+
+# HTTP client is now closed
+```
+
 ## Versioning
 
 This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
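
The new README snippet covers the synchronous client only, and the `proxies=` → `proxy=` change tracks httpx's deprecation of the plural argument. For the async side, here is a minimal sketch of the same pattern, assuming `AsyncLumaAI` (defined in `src/lumaai/_client.py` below) supports the async context-manager protocol the way the sync client supports `with` — the commit does not show that directly:

```python
import asyncio

from lumaai import AsyncLumaAI


async def main() -> None:
    # Assumption: AsyncLumaAI implements __aenter__/__aexit__ analogously
    # to the sync client's `with` support shown in the README diff above.
    async with AsyncLumaAI() as client:
        # make requests here
        ...
    # HTTP client is now closed


asyncio.run(main())
```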

pyproject.toml

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 [project]
 name = "lumaai"
-version = "1.2.1"
+version = "1.2.2"
 description = "The official Python library for the lumaai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
@@ -10,7 +10,7 @@ authors = [
 dependencies = [
     "httpx>=0.23.0, <1",
     "pydantic>=1.9.0, <3",
-    "typing-extensions>=4.7, <5",
+    "typing-extensions>=4.10, <5",
     "anyio>=3.5.0, <5",
     "distro>=1.7.0, <2",
     "sniffio",

requirements-dev.lock

Lines changed: 3 additions & 3 deletions
@@ -62,13 +62,13 @@ platformdirs==3.11.0
     # via virtualenv
 pluggy==1.5.0
     # via pytest
-pydantic==2.9.2
+pydantic==2.10.3
     # via lumaai
-pydantic-core==2.23.4
+pydantic-core==2.27.1
     # via pydantic
 pygments==2.18.0
     # via rich
-pyright==1.1.389
+pyright==1.1.390
 pytest==8.3.3
     # via pytest-asyncio
 pytest-asyncio==0.24.0

requirements.lock

Lines changed: 2 additions & 2 deletions
@@ -30,9 +30,9 @@ httpx==0.25.2
 idna==3.4
     # via anyio
     # via httpx
-pydantic==2.9.2
+pydantic==2.10.3
     # via lumaai
-pydantic-core==2.23.4
+pydantic-core==2.27.1
     # via pydantic
 sniffio==1.3.0
     # via anyio

src/lumaai/_client.py

Lines changed: 28 additions & 36 deletions
@@ -8,7 +8,7 @@
 
 import httpx
 
-from . import resources, _exceptions
+from . import _exceptions
 from ._qs import Querystring
 from ._types import (
     NOT_GIVEN,
@@ -24,31 +24,23 @@
     get_async_library,
 )
 from ._version import __version__
+from .resources import ping, credits
 from ._streaming import Stream as Stream, AsyncStream as AsyncStream
 from ._exceptions import LumaAIError, APIStatusError
 from ._base_client import (
     DEFAULT_MAX_RETRIES,
     SyncAPIClient,
     AsyncAPIClient,
 )
+from .resources.generations import generations
 
-__all__ = [
-    "Timeout",
-    "Transport",
-    "ProxiesTypes",
-    "RequestOptions",
-    "resources",
-    "LumaAI",
-    "AsyncLumaAI",
-    "Client",
-    "AsyncClient",
-]
+__all__ = ["Timeout", "Transport", "ProxiesTypes", "RequestOptions", "LumaAI", "AsyncLumaAI", "Client", "AsyncClient"]
 
 
 class LumaAI(SyncAPIClient):
-    generations: resources.GenerationsResource
-    ping: resources.PingResource
-    credits: resources.CreditsResource
+    generations: generations.GenerationsResource
+    ping: ping.PingResource
+    credits: credits.CreditsResource
     with_raw_response: LumaAIWithRawResponse
     with_streaming_response: LumaAIWithStreamedResponse
 
@@ -106,9 +98,9 @@ def __init__(
             _strict_response_validation=_strict_response_validation,
         )
 
-        self.generations = resources.GenerationsResource(self)
-        self.ping = resources.PingResource(self)
-        self.credits = resources.CreditsResource(self)
+        self.generations = generations.GenerationsResource(self)
+        self.ping = ping.PingResource(self)
+        self.credits = credits.CreditsResource(self)
         self.with_raw_response = LumaAIWithRawResponse(self)
         self.with_streaming_response = LumaAIWithStreamedResponse(self)
 
@@ -218,9 +210,9 @@ def _make_status_error(
 
 
 class AsyncLumaAI(AsyncAPIClient):
-    generations: resources.AsyncGenerationsResource
-    ping: resources.AsyncPingResource
-    credits: resources.AsyncCreditsResource
+    generations: generations.AsyncGenerationsResource
+    ping: ping.AsyncPingResource
+    credits: credits.AsyncCreditsResource
     with_raw_response: AsyncLumaAIWithRawResponse
     with_streaming_response: AsyncLumaAIWithStreamedResponse
 
@@ -278,9 +270,9 @@ def __init__(
             _strict_response_validation=_strict_response_validation,
        )
 
-        self.generations = resources.AsyncGenerationsResource(self)
-        self.ping = resources.AsyncPingResource(self)
-        self.credits = resources.AsyncCreditsResource(self)
+        self.generations = generations.AsyncGenerationsResource(self)
+        self.ping = ping.AsyncPingResource(self)
+        self.credits = credits.AsyncCreditsResource(self)
         self.with_raw_response = AsyncLumaAIWithRawResponse(self)
         self.with_streaming_response = AsyncLumaAIWithStreamedResponse(self)
 
@@ -391,30 +383,30 @@ def _make_status_error(
 
 class LumaAIWithRawResponse:
     def __init__(self, client: LumaAI) -> None:
-        self.generations = resources.GenerationsResourceWithRawResponse(client.generations)
-        self.ping = resources.PingResourceWithRawResponse(client.ping)
-        self.credits = resources.CreditsResourceWithRawResponse(client.credits)
+        self.generations = generations.GenerationsResourceWithRawResponse(client.generations)
+        self.ping = ping.PingResourceWithRawResponse(client.ping)
+        self.credits = credits.CreditsResourceWithRawResponse(client.credits)
 
 
 class AsyncLumaAIWithRawResponse:
     def __init__(self, client: AsyncLumaAI) -> None:
-        self.generations = resources.AsyncGenerationsResourceWithRawResponse(client.generations)
-        self.ping = resources.AsyncPingResourceWithRawResponse(client.ping)
-        self.credits = resources.AsyncCreditsResourceWithRawResponse(client.credits)
+        self.generations = generations.AsyncGenerationsResourceWithRawResponse(client.generations)
+        self.ping = ping.AsyncPingResourceWithRawResponse(client.ping)
+        self.credits = credits.AsyncCreditsResourceWithRawResponse(client.credits)
 
 
 class LumaAIWithStreamedResponse:
     def __init__(self, client: LumaAI) -> None:
-        self.generations = resources.GenerationsResourceWithStreamingResponse(client.generations)
-        self.ping = resources.PingResourceWithStreamingResponse(client.ping)
-        self.credits = resources.CreditsResourceWithStreamingResponse(client.credits)
+        self.generations = generations.GenerationsResourceWithStreamingResponse(client.generations)
+        self.ping = ping.PingResourceWithStreamingResponse(client.ping)
+        self.credits = credits.CreditsResourceWithStreamingResponse(client.credits)
 
 
 class AsyncLumaAIWithStreamedResponse:
     def __init__(self, client: AsyncLumaAI) -> None:
-        self.generations = resources.AsyncGenerationsResourceWithStreamingResponse(client.generations)
-        self.ping = resources.AsyncPingResourceWithStreamingResponse(client.ping)
-        self.credits = resources.AsyncCreditsResourceWithStreamingResponse(client.credits)
+        self.generations = generations.AsyncGenerationsResourceWithStreamingResponse(client.generations)
+        self.ping = ping.AsyncPingResourceWithStreamingResponse(client.ping)
+        self.credits = credits.AsyncCreditsResourceWithStreamingResponse(client.credits)
 
 
 Client = LumaAI

src/lumaai/_models.py

Lines changed: 3 additions & 0 deletions
@@ -46,6 +46,7 @@
     strip_not_given,
     extract_type_arg,
     is_annotated_type,
+    is_type_alias_type,
     strip_annotated_type,
 )
 from ._compat import (
@@ -428,6 +429,8 @@ def construct_type(*, value: object, type_: object) -> object:
     # we allow `object` as the input type because otherwise, passing things like
     # `Literal['value']` will be reported as a type error by type checkers
     type_ = cast("type[object]", type_)
+    if is_type_alias_type(type_):
+        type_ = type_.__value__  # type: ignore[unreachable]
 
     # unwrap `Annotated[T, ...]` -> `T`
     if is_annotated_type(type_):
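
For context, `TypeAliasType` is the runtime object behind PEP 695 `type` aliases, available from `typing_extensions` (likely the reason pyproject.toml now requires `typing-extensions>=4.10`). Its `__value__` attribute holds the aliased type, which is what `construct_type` now unwraps before its existing `Annotated` handling. A minimal sketch with a hypothetical alias name:

```python
from typing_extensions import TypeAliasType

# Hypothetical alias for illustration; on Python 3.12+ this is what
# `type GenerationId = str` produces at runtime.
GenerationId = TypeAliasType("GenerationId", str)

# The alias object itself is not a class; its __value__ holds the real
# type, which is what construct_type resolves before validating.
assert GenerationId.__value__ is str
```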

src/lumaai/_response.py

Lines changed: 10 additions & 10 deletions
@@ -25,7 +25,7 @@
 import pydantic
 
 from ._types import NoneType
-from ._utils import is_given, extract_type_arg, is_annotated_type, extract_type_var_from_base
+from ._utils import is_given, extract_type_arg, is_annotated_type, is_type_alias_type, extract_type_var_from_base
 from ._models import BaseModel, is_basemodel
 from ._constants import RAW_RESPONSE_HEADER, OVERRIDE_CAST_TO_HEADER
 from ._streaming import Stream, AsyncStream, is_stream_class_type, extract_stream_chunk_type
@@ -126,9 +126,15 @@ def __repr__(self) -> str:
         )
 
     def _parse(self, *, to: type[_T] | None = None) -> R | _T:
+        cast_to = to if to is not None else self._cast_to
+
+        # unwrap `TypeAlias('Name', T)` -> `T`
+        if is_type_alias_type(cast_to):
+            cast_to = cast_to.__value__  # type: ignore[unreachable]
+
         # unwrap `Annotated[T, ...]` -> `T`
-        if to and is_annotated_type(to):
-            to = extract_type_arg(to, 0)
+        if cast_to and is_annotated_type(cast_to):
+            cast_to = extract_type_arg(cast_to, 0)
 
         if self._is_sse_stream:
             if to:
@@ -164,18 +170,12 @@ def _parse(self, *, to: type[_T] | None = None) -> R | _T:
             return cast(
                 R,
                 stream_cls(
-                    cast_to=self._cast_to,
+                    cast_to=cast_to,
                     response=self.http_response,
                     client=cast(Any, self._client),
                 ),
             )
 
-        cast_to = to if to is not None else self._cast_to
-
-        # unwrap `Annotated[T, ...]` -> `T`
-        if is_annotated_type(cast_to):
-            cast_to = extract_type_arg(cast_to, 0)
-
         if cast_to is NoneType:
             return cast(R, None)
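
This refactor hoists the `cast_to` resolution to the top of `_parse`, so both unwrapping steps now run before the streaming branch as well (previously they applied only on the non-streaming path). Here is a self-contained sketch of the unwrap order, using public `typing_extensions` helpers rather than the SDK's private `_utils` functions; the alias name is hypothetical:

```python
from typing_extensions import Annotated, TypeAliasType, get_args, get_origin

# Hypothetical alias wrapping an Annotated type, for illustration.
PingResponse = TypeAliasType("PingResponse", Annotated[dict, "ping metadata"])


def unwrap_cast_to(tp: object) -> object:
    # Step 1: unwrap `TypeAliasType('Name', T)` -> `T`; this must run first,
    # since the alias value may itself be an Annotated type.
    if isinstance(tp, TypeAliasType):
        tp = tp.__value__
    # Step 2: unwrap `Annotated[T, ...]` -> `T`.
    if get_origin(tp) is Annotated:
        tp = get_args(tp)[0]
    return tp


assert unwrap_cast_to(PingResponse) is dict
```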
