Commit 0668954: chore: update SDK settings (#219)

Author: Stainless Bot
Parent: 600e707


81 files changed (+1224, -1169 lines)

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -32,7 +32,7 @@ $ pip install -r requirements-dev.lock
 ## Modifying/Adding code
 
 Most of the SDK is generated code, and any modified code will be overridden on the next generation. The
-`src/openlayer/lib/` and `examples/` directories are exceptions and will never be overridden.
+`src/openlayer-test/lib/` and `examples/` directories are exceptions and will never be overridden.
 
 ## Adding and running examples
 
````

README.md

Lines changed: 78 additions & 95 deletions

````diff
@@ -1,6 +1,6 @@
 # Openlayer Python API library
 
-[![PyPI version](https://img.shields.io/pypi/v/openlayer.svg)](https://pypi.org/project/openlayer/)
+[![PyPI version](https://img.shields.io/pypi/v/openlayer-test.svg)](https://pypi.org/project/openlayer-test/)
 
 The Openlayer Python library provides convenient access to the Openlayer REST API from any Python 3.7+
 application. The library includes type definitions for all request params and response fields,
@@ -16,7 +16,7 @@ The REST API documentation can be found [on openlayer.com](https://openlayer.com
 
 ```sh
 # install from PyPI
-pip install --pre openlayer
+pip install --pre openlayer-test
 ```
 
 ## Usage
@@ -25,7 +25,7 @@ The full API of this library can be found in [api.md](api.md).
 
 ```python
 import os
-from openlayer import Openlayer
+from openlayer-test import Openlayer
 
 client = Openlayer(
     # This is the default and can be omitted
````
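One note on the renamed imports throughout this diff: `openlayer-test` can be a valid PyPI distribution name, but it is not a valid Python module name, because hyphens are not allowed in identifiers. A statement like `from openlayer-test import Openlayer` is a `SyntaxError`; a distribution named `openlayer-test` would conventionally be imported under an underscored name such as `openlayer_test` (hypothetical here). A quick check:

```python
# Hyphens are not legal in Python identifiers, so the README's new
# import line cannot even be compiled by CPython.
source = "from openlayer-test import Openlayer"
try:
    compile(source, "<readme>", "exec")
    compiled_ok = True
except SyntaxError:
    compiled_ok = False
print("compiles:", compiled_ok)  # compiles: False
```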
````diff
@@ -41,15 +41,13 @@ data_stream_response = client.inference_pipelines.data.stream(
         "cost_column_name": "cost",
         "timestamp_column_name": "timestamp",
     },
-    rows=[
-        {
-            "user_query": "what's the meaning of life?",
-            "output": "42",
-            "tokens": 7,
-            "cost": 0.02,
-            "timestamp": 1620000000,
-        }
-    ],
+    rows=[{
+        "user_query": "what's the meaning of life?",
+        "output": "42",
+        "tokens": 7,
+        "cost": 0.02,
+        "timestamp": 1620000000,
+    }],
 )
 print(data_stream_response.success)
 ```
@@ -66,36 +64,32 @@ Simply import `AsyncOpenlayer` instead of `Openlayer` and use `await` with each
 ```python
 import os
 import asyncio
-from openlayer import AsyncOpenlayer
+from openlayer-test import AsyncOpenlayer
 
 client = AsyncOpenlayer(
     # This is the default and can be omitted
     api_key=os.environ.get("OPENLAYER_API_KEY"),
 )
 
-
 async def main() -> None:
-    data_stream_response = await client.inference_pipelines.data.stream(
-        "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
-        config={
-            "input_variable_names": ["user_query"],
-            "output_column_name": "output",
-            "num_of_token_column_name": "tokens",
-            "cost_column_name": "cost",
-            "timestamp_column_name": "timestamp",
-        },
-        rows=[
-            {
-                "user_query": "what's the meaning of life?",
-                "output": "42",
-                "tokens": 7,
-                "cost": 0.02,
-                "timestamp": 1620000000,
-            }
-        ],
-    )
-    print(data_stream_response.success)
-
+    data_stream_response = await client.inference_pipelines.data.stream(
+        "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
+        config={
+            "input_variable_names": ["user_query"],
+            "output_column_name": "output",
+            "num_of_token_column_name": "tokens",
+            "cost_column_name": "cost",
+            "timestamp_column_name": "timestamp",
+        },
+        rows=[{
+            "user_query": "what's the meaning of life?",
+            "output": "42",
+            "tokens": 7,
+            "cost": 0.02,
+            "timestamp": 1620000000,
+        }],
+    )
+    print(data_stream_response.success)
 
 asyncio.run(main())
 ```
````
````diff
@@ -113,16 +107,16 @@ Typed requests and responses provide autocomplete and documentation within your
 
 ## Handling errors
 
-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openlayer.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openlayer-test.APIConnectionError` is raised.
 
 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `openlayer.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `openlayer-test.APIStatusError` is raised, containing `status_code` and `response` properties.
 
-All errors inherit from `openlayer.APIError`.
+All errors inherit from `openlayer-test.APIError`.
 
 ```python
-import openlayer
-from openlayer import Openlayer
+import openlayer-test
+from openlayer-test import Openlayer
 
 client = Openlayer()
 
@@ -136,22 +130,20 @@ try:
             "cost_column_name": "cost",
             "timestamp_column_name": "timestamp",
         },
-        rows=[
-            {
-                "user_query": "what's the meaning of life?",
-                "output": "42",
-                "tokens": 7,
-                "cost": 0.02,
-                "timestamp": 1620000000,
-            }
-        ],
+        rows=[{
+            "user_query": "what's the meaning of life?",
+            "output": "42",
+            "tokens": 7,
+            "cost": 0.02,
+            "timestamp": 1620000000,
+        }],
     )
-except openlayer.APIConnectionError as e:
+except openlayer-test.APIConnectionError as e:
     print("The server could not be reached")
-    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except openlayer.RateLimitError as e:
+    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
+except openlayer-test.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except openlayer.APIStatusError as e:
+except openlayer-test.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
    print(e.response)
````
````diff
@@ -179,7 +171,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
 You can use the `max_retries` option to configure or disable retry settings:
 
 ```python
-from openlayer import Openlayer
+from openlayer-test import Openlayer
 
 # Configure the default for all requests:
 client = Openlayer(
@@ -188,7 +180,7 @@ client = Openlayer(
 )
 
 # Or, configure per-request:
-client.with_options(max_retries=5).inference_pipelines.data.stream(
+client.with_options(max_retries = 5).inference_pipelines.data.stream(
     "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
     config={
         "input_variable_names": ["user_query"],
@@ -197,15 +189,13 @@ client.with_options(max_retries=5).inference_pipelines.data.stream(
         "cost_column_name": "cost",
         "timestamp_column_name": "timestamp",
     },
-    rows=[
-        {
-            "user_query": "what's the meaning of life?",
-            "output": "42",
-            "tokens": 7,
-            "cost": 0.02,
-            "timestamp": 1620000000,
-        }
-    ],
+    rows=[{
+        "user_query": "what's the meaning of life?",
+        "output": "42",
+        "tokens": 7,
+        "cost": 0.02,
+        "timestamp": 1620000000,
+    }],
 )
 ```
 
````
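The retry behavior the hunks above document (a configurable `max_retries` with exponential backoff on connection errors) can be illustrated with a plain-Python sketch. `retry_with_backoff` and `flaky` are hypothetical names for illustration only, not part of the SDK:

```python
import random
import time

def retry_with_backoff(func, max_retries=2, base_delay=0.05):
    """Call func(); on ConnectionError retry up to max_retries more times,
    sleeping base_delay * 2**attempt (plus a little jitter) between tries."""
    for attempt in range(max_retries + 1):
        try:
            return func()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted: surface the error to the caller
            delay = base_delay * (2 ** attempt) * (1.0 + random.random() * 0.1)
            time.sleep(delay)

calls = {"count": 0}

def flaky():
    # Fails twice with a "connection" error, then succeeds.
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

result = retry_with_backoff(flaky, max_retries=2)
print(result, calls["count"])  # ok 3
```

A `max_retries` of 2 here means up to three total attempts, which is why the third call succeeds just before the retries are exhausted.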
````diff
@@ -215,7 +205,7 @@ By default requests time out after 1 minute. You can configure this with a `time
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object:
 
 ```python
-from openlayer import Openlayer
+from openlayer-test import Openlayer
 
 # Configure the default for all requests:
 client = Openlayer(
@@ -229,7 +219,7 @@ client = Openlayer(
 )
 
 # Override per-request:
-client.with_options(timeout=5.0).inference_pipelines.data.stream(
+client.with_options(timeout = 5.0).inference_pipelines.data.stream(
     "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
     config={
         "input_variable_names": ["user_query"],
@@ -238,15 +228,13 @@ client.with_options(timeout=5.0).inference_pipelines.data.stream(
         "cost_column_name": "cost",
         "timestamp_column_name": "timestamp",
     },
-    rows=[
-        {
-            "user_query": "what's the meaning of life?",
-            "output": "42",
-            "tokens": 7,
-            "cost": 0.02,
-            "timestamp": 1620000000,
-        }
-    ],
+    rows=[{
+        "user_query": "what's the meaning of life?",
+        "output": "42",
+        "tokens": 7,
+        "cost": 0.02,
+        "timestamp": 1620000000,
+    }],
 )
 ```
 
````
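As a loose, SDK-independent illustration of what a per-request `timeout` means, here is a stdlib sketch in which the caller abandons a call that exceeds its budget while the work may still be in flight (all names here are hypothetical, and this is an analogy, not the SDK's mechanism):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def slow_request():
    time.sleep(0.2)  # simulate a server that answers too slowly
    return "response"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_request)
    try:
        body = future.result(timeout=0.05)  # 50 ms budget for this call
        timed_out = False
    except FutureTimeout:
        timed_out = True  # give up on the call, as a client timeout would

print("timed out:", timed_out)  # timed out: True
```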
````diff
@@ -283,7 +271,7 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
-from openlayer import Openlayer
+from openlayer-test import Openlayer
 
 client = Openlayer()
 response = client.inference_pipelines.data.with_raw_response.stream(
@@ -309,9 +297,9 @@ data = response.parse()  # get the object that `inference_pipelines.data.stream(
 print(data.success)
 ```
 
-These methods return an [`APIResponse`](https://github.com/openlayer-ai/openlayer-python/tree/main/src/openlayer/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/openlayer-ai/openlayer-python/tree/main/src/openlayer-test/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/openlayer-ai/openlayer-python/tree/main/src/openlayer/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/openlayer-ai/openlayer-python/tree/main/src/openlayer-test/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
````
````diff
@@ -329,20 +317,18 @@ with client.inference_pipelines.data.with_streaming_response.stream(
         "cost_column_name": "cost",
         "timestamp_column_name": "timestamp",
     },
-    rows=[
-        {
-            "user_query": "what's the meaning of life?",
-            "output": "42",
-            "tokens": 7,
-            "cost": 0.02,
-            "timestamp": 1620000000,
-        }
-    ],
-) as response:
-    print(response.headers.get("X-My-Header"))
+    rows=[{
+        "user_query": "what's the meaning of life?",
+        "output": "42",
+        "tokens": 7,
+        "cost": 0.02,
+        "timestamp": 1620000000,
+    }],
+) as response :
+    print(response.headers.get('X-My-Header'))
 
     for line in response.iter_lines():
-        print(line)
+        print(line)
 ```
 
 The context manager is required so that the response will reliably be closed.

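The closing guarantee mentioned above is ordinary Python context-manager behavior; a minimal sketch with a hypothetical `FakeStreamingResponse` (not the SDK class) shows why the `with` block reliably closes the response even if iteration is abandoned or raises:

```python
class FakeStreamingResponse:
    """Hypothetical stand-in for a streaming HTTP response object."""

    def __init__(self, lines):
        self._lines = lines
        self.closed = False

    def iter_lines(self):
        yield from self._lines

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()  # runs even if the body raised or returned early
        return False  # never swallow exceptions, just close first

    def close(self):
        self.closed = True

resp = FakeStreamingResponse(["line 1", "line 2"])
with resp as response:
    for line in response.iter_lines():
        print(line)
print(resp.closed)  # True: __exit__ closed the response
```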
````diff
@@ -391,15 +377,12 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c
 - Additional [advanced](https://www.python-httpx.org/advanced/#client-instances) functionality
 
 ```python
-from openlayer import Openlayer, DefaultHttpxClient
+from openlayer-test import Openlayer, DefaultHttpxClient
 
 client = Openlayer(
     # Or use the `OPENLAYER_BASE_URL` env var
     base_url="http://my.test.server.example.com:8083",
-    http_client=DefaultHttpxClient(
-        proxies="http://my.test.proxy.example.com",
-        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
-    ),
+    http_client=DefaultHttpxClient(proxies="http://my.test.proxy.example.com", transport=httpx.HTTPTransport(local_address="0.0.0.0")),
 )
 ```
 
````