
docs: fix typos #1282


Closed · wants to merge 1 commit
2 changes: 1 addition & 1 deletion .kokoro/presubmit-against-pubsublite-samples.sh
@@ -29,7 +29,7 @@ env | grep KOKORO
# Install nox
python3.9 -m pip install --upgrade --quiet nox

-# Use secrets acessor service account to get secrets
+# Use secrets accessor service account to get secrets
if [[ -f "${KOKORO_GFILE_DIR}/secrets_viewer_service_account.json" ]]; then
gcloud auth activate-service-account \
--key-file="${KOKORO_GFILE_DIR}/secrets_viewer_service_account.json" \
2 changes: 1 addition & 1 deletion .kokoro/test-samples-impl.sh
@@ -35,7 +35,7 @@ env | grep KOKORO
# Install nox
python3.9 -m pip install --upgrade --quiet nox

-# Use secrets acessor service account to get secrets
+# Use secrets accessor service account to get secrets
if [[ -f "${KOKORO_GFILE_DIR}/secrets_viewer_service_account.json" ]]; then
gcloud auth activate-service-account \
--key-file="${KOKORO_GFILE_DIR}/secrets_viewer_service_account.json" \
2 changes: 1 addition & 1 deletion .kokoro/trampoline_v2.sh
@@ -391,7 +391,7 @@ docker_flags=(
# Use the host network.
"--network=host"

-# Run in priviledged mode. We are not using docker for sandboxing or
+# Run in privileged mode. We are not using docker for sandboxing or
# isolation, just for packaging our dev tools.
"--privileged"

10 changes: 5 additions & 5 deletions CHANGELOG.md
@@ -747,7 +747,7 @@
* flaky samples tests ([#263](https://www.github.com/googleapis/python-pubsub/issues/263)) ([3d6a29d](https://www.github.com/googleapis/python-pubsub/commit/3d6a29de07cc09be663c90a3333f4cd33633994f))
* Modify synth.py to update grpc transport options ([#266](https://www.github.com/googleapis/python-pubsub/issues/266)) ([41dcd30](https://www.github.com/googleapis/python-pubsub/commit/41dcd30636168f3dd1248f1d99170d531fc9bcb8))
* pass anonymous credentials for emulator ([#250](https://www.github.com/googleapis/python-pubsub/issues/250)) ([8eed8e1](https://www.github.com/googleapis/python-pubsub/commit/8eed8e16019510dc8b20fb6b009d61a7ac532d26))
-* remove grpc send/recieve limits ([#259](https://www.github.com/googleapis/python-pubsub/issues/259)) ([fd2840c](https://www.github.com/googleapis/python-pubsub/commit/fd2840c10f92b03da7f4b40ac69c602220757c0a))
+* remove grpc send/receive limits ([#259](https://www.github.com/googleapis/python-pubsub/issues/259)) ([fd2840c](https://www.github.com/googleapis/python-pubsub/commit/fd2840c10f92b03da7f4b40ac69c602220757c0a))

## [2.2.0](https://www.github.com/googleapis/python-pubsub/compare/v2.1.0...v2.2.0) (2020-11-16)

@@ -861,7 +861,7 @@ This is the last release that supports Python 2.7 and 3.5.


### Internal / Testing Changes
-- Re-generated service implementaton using synth: removed experimental notes from the RetryPolicy and filtering features in anticipation of GA, added DetachSubscription (experimental) ([#114](https://github.com/googleapis/python-pubsub/pull/114)) ([0132a46](https://github.com/googleapis/python-pubsub/commit/0132a4680e0727ce45d5e27d98ffc9f3541a0962))
+- Re-generated service implementation using synth: removed experimental notes from the RetryPolicy and filtering features in anticipation of GA, added DetachSubscription (experimental) ([#114](https://github.com/googleapis/python-pubsub/pull/114)) ([0132a46](https://github.com/googleapis/python-pubsub/commit/0132a4680e0727ce45d5e27d98ffc9f3541a0962))
- Incorporate will_accept() checks into publish() ([#108](https://github.com/googleapis/python-pubsub/pull/108)) ([6c7677e](https://github.com/googleapis/python-pubsub/commit/6c7677ecb259672bbb9b6f7646919e602c698570))

## [1.5.0](https://www.github.com/googleapis/python-pubsub/compare/v1.4.3...v1.5.0) (2020-05-04)
@@ -984,7 +984,7 @@ This is the last release that supports Python 2.7 and 3.5.
### Documentation
- Update docstrings for client kwargs and fix return types uris ([#9037](https://github.com/googleapis/google-cloud-python/pull/9037))
- Remove CI for gh-pages, use googleapis.dev for api_core refs. ([#9085](https://github.com/googleapis/google-cloud-python/pull/9085))
-- Remove compatability badges from READMEs. ([#9035](https://github.com/googleapis/google-cloud-python/pull/9035))
+- Remove compatibility badges from READMEs. ([#9035](https://github.com/googleapis/google-cloud-python/pull/9035))

### Internal / Testing Changes
- Add dead-letter-policy field in preparation for its implementation (via synth) ([#9078](https://github.com/googleapis/google-cloud-python/pull/9078))
@@ -1023,7 +1023,7 @@ This is the last release that supports Python 2.7 and 3.5.


### Implementation Changes
-- Accomodate new location of 'IAMPolicyStub' (via synth). ([#8680](https://github.com/googleapis/google-cloud-python/pull/8680))
+- Accommodate new location of 'IAMPolicyStub' (via synth). ([#8680](https://github.com/googleapis/google-cloud-python/pull/8680))
- Use kwargs in test_subscriber_client ([#8414](https://github.com/googleapis/google-cloud-python/pull/8414))

### New Features
@@ -1308,7 +1308,7 @@ This is the last release that supports Python 2.7 and 3.5.
### Implementation changes

- Drop leased messages after flow_control.max_lease_duration has passed. (#5020)
-- Fix mantain leases to not modack messages it just dropped (#5045)
+- Fix maintain leases to not modack messages it just dropped (#5045)
- Avoid race condition in maintain_leases by copying leased_messages (#5035)
- Retry subscription stream on InternalServerError, Unknown, and GatewayTimeout (#5021)
- Use the rpc's status to determine when to exit the request generator thread (#5054)
2 changes: 1 addition & 1 deletion docs/pubsub/publisher/index.rst
@@ -113,7 +113,7 @@ an instance of :class:`~.pubsub_v1.publisher.futures.Future`.

The returned future conforms for the most part to the interface of
the standard library's :class:`~concurrent.futures.Future`, but might not
-be usable in all cases which expect that exact implementaton.
+be usable in all cases which expect that exact implementation.

You can use this to ensure that the publish succeeded:

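The docstring above notes that the returned future follows the standard-library Future interface and can be used to confirm a publish. As a rough sketch only (not part of this change, using placeholder project and topic names), that check might look like:

import concurrent.futures
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")  # placeholder names

# publish() returns a future; result() blocks until the server confirms
# the publish and returns the message ID, or raises on failure.
future = publisher.publish(topic_path, b"example payload")
try:
    message_id = future.result(timeout=60)
    print(f"Published message {message_id}.")
except concurrent.futures.TimeoutError:
    print("Publish did not complete within 60 seconds.")
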
2 changes: 1 addition & 1 deletion google/cloud/pubsub_v1/futures.py
@@ -23,7 +23,7 @@
class Future(concurrent.futures.Future, google.api_core.future.Future):
"""Encapsulation of the asynchronous execution of an action.

-This object is returned from asychronous Pub/Sub calls, and is the
+This object is returned from asynchronous Pub/Sub calls, and is the
interface to determine the status of those calls.

This object should not be created directly, but is returned by other
@@ -53,7 +53,7 @@ def __init__(self, message: PubsubMessage):

# This will be set by `start_subscribe_span` method, if a publisher create span
# context was extracted from trace propagation. And will be used by spans like
-# proces span to add links to the publisher create span.
+# process span to add links to the publisher create span.
self._publisher_create_span_context: Optional[context.Context] = None

# This will be set by `start_subscribe_span` method and will be used
4 changes: 2 additions & 2 deletions google/cloud/pubsub_v1/publisher/_batch/thread.py
@@ -84,10 +84,10 @@ class Batch(base.Batch):
commit_when_full:
Whether to commit the batch when the batch is full.
commit_retry:
-Designation of what errors, if any, should be retried when commiting
+Designation of what errors, if any, should be retried when committing
the batch. If not provided, a default retry is used.
commit_timeout:
-The timeout to apply when commiting the batch. If not provided, a default
+The timeout to apply when committing the batch. If not provided, a default
timeout is used.
"""

4 changes: 2 additions & 2 deletions google/cloud/pubsub_v1/publisher/client.py
@@ -223,7 +223,7 @@ def api(self):
"""
msg = (
'The "api" property only exists for backward compatibility, access its '
-'attributes directly thorugh the client instance (e.g. "client.foo" '
+'attributes directly through the client instance (e.g. "client.foo" '
'instead of "client.api.foo").'
)
warnings.warn(msg, category=DeprecationWarning)
@@ -336,7 +336,7 @@ def publish( # type: ignore[override]
retry:
Designation of what errors, if any, should be retried. If `ordering_key`
is specified, the total retry deadline will be changed to "infinity".
-If given, it overides any retry passed into the client through
+If given, it overrides any retry passed into the client through
the ``publisher_options`` argument.
timeout:
The timeout for the RPC request. Can be used to override any timeout
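The retry docstring above says a per-call retry overrides any retry configured through publisher_options. A minimal sketch of that, not taken from this change and using placeholder names, assuming the google.api_core retry helpers:

from google.api_core import retry as api_retry
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")  # placeholder names

# A retry object passed to publish() takes precedence over any retry that
# was supplied to the client via publisher_options.
custom_retry = api_retry.Retry(initial=0.1, maximum=10.0, deadline=60.0)
future = publisher.publish(topic_path, b"payload", retry=custom_retry)
print(future.result())
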
2 changes: 1 addition & 1 deletion google/cloud/pubsub_v1/publisher/flow_controller.py
@@ -170,7 +170,7 @@ def add(self, message: MessageType) -> None:
del self._waiting[current_thread]

def release(self, message: MessageType) -> None:
"""Release a mesage from flow control.
"""Release a message from flow control.

Args:
message:
2 changes: 1 addition & 1 deletion google/cloud/pubsub_v1/publisher/futures.py
@@ -24,7 +24,7 @@


class Future(futures.Future):
"""This future object is returned from asychronous Pub/Sub publishing
"""This future object is returned from asynchronous Pub/Sub publishing
calls.

Calling :meth:`result` will resolve the future by returning the message
6 changes: 3 additions & 3 deletions google/cloud/pubsub_v1/subscriber/_protocol/histogram.py
@@ -41,7 +41,7 @@ def __init__(self, data: Optional[Dict[int, int]] = None):

Args:
data:
-The data strucure to be used to store the underlying data. The default
+The data structure to be used to store the underlying data. The default
is an empty dictionary. This can be set to a dictionary-like object if
required (for example, if a special object is needed for concurrency
reasons).
@@ -129,11 +129,11 @@ def add(self, value: Union[int, float]) -> None:
self._len += 1

def percentile(self, percent: Union[int, float]) -> int:
"""Return the value that is the Nth precentile in the histogram.
"""Return the value that is the Nth percentile in the histogram.

Args:
percent:
-The precentile being sought. The default consumer implementations
+The percentile being sought. The default consumer implementations
consistently use ``99``.

Returns:
@@ -62,7 +62,7 @@ def size(self) -> int:

def get(self) -> Optional["subscriber.message.Message"]:
"""Gets a message from the on-hold queue. A message with an ordering
-key wont be returned if there's another message with the same key in
+key won't be returned if there's another message with the same key in
flight.

Returns:
2 changes: 1 addition & 1 deletion google/cloud/pubsub_v1/subscriber/client.py
@@ -170,7 +170,7 @@ def api(self):
"""
msg = (
'The "api" property only exists for backward compatibility, access its '
-'attributes directly thorugh the client instance (e.g. "client.foo" '
+'attributes directly through the client instance (e.g. "client.foo" '
'instead of "client.api.foo").'
)
warnings.warn(msg, category=DeprecationWarning)
6 changes: 3 additions & 3 deletions google/cloud/pubsub_v1/subscriber/message.py
@@ -261,7 +261,7 @@ def ack(self) -> None:
Acks in Pub/Sub are best effort. You should always
ensure that your processing code is idempotent, as you may
receive any given message more than once. If you need strong
-guarantees about acks and re-deliveres, enable exactly-once
+guarantees about acks and re-delivers, enable exactly-once
delivery on your subscription and use the `ack_with_response`
method instead. Exactly once delivery is a preview feature.
For more details, see:
@@ -372,7 +372,7 @@ def modify_ack_deadline(self, seconds: int) -> None:
The default implementation handles automatically modacking received messages for you;
you should not need to manually deal with setting ack deadlines. The exception case is
if you are implementing your own custom subclass of
-:class:`~.pubsub_v1.subcriber._consumer.Consumer`.
+:class:`~.pubsub_v1.subscriber._consumer.Consumer`.

Args:
seconds (int):
@@ -398,7 +398,7 @@ def modify_ack_deadline_with_response(self, seconds: int) -> "futures.Future":
The default implementation handles automatically modacking received messages for you;
you should not need to manually deal with setting ack deadlines. The exception case is
if you are implementing your own custom subclass of
-:class:`~.pubsub_v1.subcriber._consumer.Consumer`.
+:class:`~.pubsub_v1.subscriber._consumer.Consumer`.

If exactly-once delivery is NOT enabled on the subscription, the
future returns immediately with an AcknowledgeStatus.SUCCESS.
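The ack() docstring above recommends ack_with_response() when strong acknowledgement guarantees are needed. A minimal sketch of a callback using it, not taken from this change and assuming exactly-once delivery is enabled on the subscription:

from google.cloud import pubsub_v1
from google.cloud.pubsub_v1.subscriber import exceptions as sub_exceptions

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    try:
        # The returned future resolves once the service confirms the ack.
        message.ack_with_response().result(timeout=30)
        print(f"Ack confirmed for message {message.message_id}.")
    except sub_exceptions.AcknowledgeError as e:
        print(f"Ack failed with error code {e.error_code}.")
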
2 changes: 1 addition & 1 deletion google/cloud/pubsub_v1/subscriber/scheduler.py
@@ -160,7 +160,7 @@ def shutdown(
try:
while True:
work_item = self._executor._work_queue.get(block=False)
-if work_item is None: # Exceutor in shutdown mode.
+if work_item is None: # Executor in shutdown mode.
continue
dropped_messages.append(work_item.args[0]) # type: ignore[index]
except queue.Empty:
2 changes: 1 addition & 1 deletion google/pubsub_v1/types/pubsub.py
@@ -1137,7 +1137,7 @@ class Subscription(proto.Message):
analytics_hub_subscription_info (google.pubsub_v1.types.Subscription.AnalyticsHubSubscriptionInfo):
Output only. Information about the associated
Analytics Hub subscription. Only set if the
-subscritpion is created by Analytics Hub.
+subscription is created by Analytics Hub.
"""

class State(proto.Enum):
2 changes: 1 addition & 1 deletion owlbot.py
@@ -309,7 +309,7 @@
)
if count < 9:
raise Exception(
"Default retry deadline not overriden for all publisher methods."
"Default retry deadline not overridden for all publisher methods."
)

# The namespace package declaration in google/cloud/__init__.py should be excluded
2 changes: 1 addition & 1 deletion samples/snippets/subscriber.py
@@ -896,7 +896,7 @@ def receive_messages_with_blocking_shutdown(

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
print(f"Received {message.data!r}.")
-time.sleep(timeout + 3.0) # Pocess longer than streaming pull future timeout.
+time.sleep(timeout + 3.0) # Process longer than streaming pull future timeout.
message.ack()
print(f"Done processing the message {message.data!r}.")

8 changes: 4 additions & 4 deletions tests/system.py
@@ -444,7 +444,7 @@ def test_managing_subscription_iam_policy(
def test_subscriber_not_leaking_open_sockets(
publisher, topic_path_base, subscription_path_base, cleanup, transport
):
-# Make sure the topic and the supscription get deleted.
+# Make sure the topic and the subscription get deleted.
# NOTE: Since subscriber client will be closed in the test, we should not
# use the shared `subscriber` fixture, but instead construct a new client
# in this test.
@@ -752,7 +752,7 @@ def callback2(message):
subscription_future.result() # block until shutdown completes

# There should be 7 messages left that were not yet processed and none of them
-# should be a message that should have already been sucessfully processed in the
+# should be a message that should have already been successfully processed in the
# first streaming pull.
assert len(remaining) == 7
assert not (set(processed_messages) & set(remaining)) # no re-delivery
@@ -867,7 +867,7 @@ def test_snapshot_seek_subscriber_permissions_sufficient(
)
subscriber_only_client = type(subscriber).from_service_account_file(filename)

-# Publish two messages and create a snapshot inbetween.
+# Publish two messages and create a snapshot in between.
_publish_messages(publisher, topic_path, batch_sizes=[1])
response = subscriber.pull(subscription=subscription_path, max_messages=10)
assert len(response.received_messages) == 1
@@ -968,7 +968,7 @@ def _publish_messages(publisher, topic_path, batch_sizes):
publish_futures.append(future)
time.sleep(0.1)

-# wait untill all messages have been successfully published
+# wait until all messages have been successfully published
for future in publish_futures:
future.result(timeout=30)

4 changes: 2 additions & 2 deletions tests/unit/pubsub_v1/publisher/batch/test_thread.py
@@ -329,7 +329,7 @@ def test_blocking__commit_wrong_messageid_length():
assert isinstance(future.exception(), exceptions.PublishError)


-def test_block__commmit_api_error():
+def test_block__commit_api_error():
batch = create_batch()
futures = (
batch.publish(
@@ -356,7 +356,7 @@ def test_block__commmit_api_error():
assert future.exception() == error


-def test_block__commmit_retry_error():
+def test_block__commit_retry_error():
batch = create_batch()
futures = (
batch.publish(
6 changes: 3 additions & 3 deletions tests/unit/pubsub_v1/publisher/test_flow_controller.py
@@ -282,7 +282,7 @@ def test_blocking_on_overflow_until_free_capacity():
assert released_count == 2, "Exactly two threads should have been unblocked."


-def test_error_if_mesage_would_block_indefinitely():
+def test_error_if_message_would_block_indefinitely():
settings = types.PublishFlowControl(
message_limit=0, # simulate non-sane settings
byte_limit=1,
@@ -339,7 +339,7 @@ def test_threads_posting_large_messages_do_not_starve():
messages = [grpc_types.PubsubMessage(data=b"x" * 10)] * 10
_run_in_daemon(flow_controller.add, messages, adding_busy_done, action_pause=0.1)

-# At the same time, gradually keep releasing the messages - the freeed up
+# At the same time, gradually keep releasing the messages - the freed up
# capacity should be consumed by the large message, not the other small messages
# being added after it.
_run_in_daemon(
@@ -376,7 +376,7 @@ def test_blocked_messages_are_accepted_in_fifo_order():
)
flow_controller = FlowController(settings)

-# It's OK if the message instance is shared, as flow controlelr is only concerned
+# It's OK if the message instance is shared, as flow controller is only concerned
# with byte sizes and counts, and not with particular message instances.
message = grpc_types.PubsubMessage(data=b"x")

4 changes: 2 additions & 2 deletions tests/unit/pubsub_v1/publisher/test_publisher_client.py
@@ -336,7 +336,7 @@ def test_opentelemetry_publish(creds, span_exporter):
# publish() function and are deterministically expected to be in the
# list of exported spans. The Publish Create span and Publish RPC span
# are run async and end at a non-deterministic time. Hence,
-# asserting that we have atleast two spans(flow control and batching span)
+# asserting that we have at least two spans(flow control and batching span)
assert len(spans) >= 2
flow_control_span = None
batching_span = None
@@ -410,7 +410,7 @@ def init(self, *args, **kwargs):
def test_init_emulator(monkeypatch):
monkeypatch.setenv("PUBSUB_EMULATOR_HOST", "/foo/bar:123")
# NOTE: When the emulator host is set, a custom channel will be used, so
-# no credentials (mock ot otherwise) can be passed in.
+# no credentials (mock or otherwise) can be passed in.
client = publisher.Client()

# Establish that a gRPC request would attempt to hit the emulator host.
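The comment fixed above describes the emulator behavior: with PUBSUB_EMULATOR_HOST set, the client builds a custom channel and needs no real credentials. A rough sketch of that usage outside the test suite, assuming a local emulator on port 8085 and placeholder names:

import os
from google.cloud import pubsub_v1

# With the emulator host set, the client targets the emulator and does not
# need real credentials.
os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"  # assumed emulator address
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("test-project", "test-topic")  # placeholder names
publisher.create_topic(request={"name": topic_path})
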
@@ -1684,7 +1684,7 @@ def test__on_response_modifies_ack_deadline():
# adjust message bookkeeping in leaser
fake_leaser_add(leaser, init_msg_count=0, assumed_msg_size=80)

-# Actually run the method and chack that correct MODACK value is used.
+# Actually run the method and check that correct MODACK value is used.
with mock.patch.object(
type(manager), "ack_deadline", new=mock.PropertyMock(return_value=18)
):
6 changes: 3 additions & 3 deletions tests/unit/pubsub_v1/subscriber/test_subscriber_client.py
@@ -122,7 +122,7 @@ def init(self, *args, **kwargs):
def test_init_emulator(monkeypatch):
monkeypatch.setenv("PUBSUB_EMULATOR_HOST", "/baz/bacon:123")
# NOTE: When the emulator host is set, a custom channel will be used, so
-# no credentials (mock ot otherwise) can be passed in.
+# no credentials (mock or otherwise) can be passed in.
client = subscriber.Client()

# Establish that a gRPC request would attempt to hit the emulator host.
@@ -229,8 +229,8 @@ def test_context_manager_raises_if_closed(creds):
with mock.patch.object(client._transport.grpc_channel, "close"):
client.close()

-expetect_msg = r"(?i).*closed.*cannot.*context manager.*"
-with pytest.raises(RuntimeError, match=expetect_msg):
+expect_msg = r"(?i).*closed.*cannot.*context manager.*"
+with pytest.raises(RuntimeError, match=expect_msg):
with client:
pass # pragma: NO COVER
