feat!: use microgenerator (#76)
* Adds tutorials using Cloud Client [(#930)](#930)

* Adds tutorials.

* Removes unused enumerate

* Adds one more tutorial as well as fixes some copy/paste typos. [(#933)](#933)

* Adds new examples, replaces markdown with restructured text [(#945)](#945)

* Adds new examples, replaces markdown with restructured text

* Address review feedback

* Use videos from public bucket, update to new client library.

* Style nit

* Updates requirements [(#952)](#952)

* Fix README rst links [(#962)](#962)

* Fix README rst links

* Update all READMEs

* change the usage file sample [(#958)](#958)

Since the file does not exist, propose using the same one as the tutorial: demomaker/gbikes_dinosaur.mp4

* Updates examples for video [(#968)](#968)

* Auto-update dependencies. [(#1093)](#1093)

* Auto-update dependencies.

* Fix storage notification poll sample

Change-Id: I6afbc79d15e050531555e4c8e51066996717a0f3

* Fix spanner samples

Change-Id: I40069222c60d57e8f3d3878167591af9130895cb

* Drop coverage because it's not useful

Change-Id: Iae399a7083d7866c3c7b9162d0de244fbff8b522

* Try again to fix flaky logging test

Change-Id: I6225c074701970c17c426677ef1935bb6d7e36b4

* Update all generated readme auth instructions [(#1121)](#1121)

Change-Id: I03b5eaef8b17ac3dc3c0339fd2c7447bd3e11bd2

* Auto-update dependencies. [(#1123)](#1123)

* Video v1beta2 [(#1088)](#1088)

* update analyze_safe_search

* update analyze_shots

* update explicit_content_detection and test

* update face detection

* update label detection (path)

* update label detection (file)

* flake

* safe search --> explicit content

* update faces tutorial

* update client library quickstart

* update shotchange tutorial

* update labels tutorial

* correct spelling

* correct start_time_offset

* import order

* rebased

* Added Link to Python Setup Guide [(#1158)](#1158)

* Update Readme.rst to add Python setup guide

As requested in b/64770713.

This sample is linked in documentation https://cloud.google.com/bigtable/docs/scaling, and it would make more sense to update the guide here than in the documentation.

* Update README.rst

* Update README.rst

* Update README.rst

* Update README.rst

* Update README.rst

* Update install_deps.tmpl.rst

* Updated readmegen scripts and re-generated related README files

* Fixed the lint error

* Tweak doc/help strings for sample tools  [(#1160)](#1160)

* Corrected copy-paste on doc string

* Updated doc/help string to be more specific to labels tool

* Made shotchange doc/help string more specific

* Tweaked doc/help string to indicate no arg expected

* Adjusted import order to satisfy flake8

* Wrapped doc string to 79 chars to flake8 correctly

* Adjusted import order to pass flake8 test

* Auto-update dependencies. [(#1186)](#1186)

* update samples to v1 [(#1221)](#1221)

* update samples to v1

* replace while loop with operation.result(timeout)
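
The polling change above can be sketched with a generic future-like object (a stand-in assumption — the real samples use the long-running operation returned by `annotate_video`, which exposes the same `result(timeout=...)` interface):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_annotation():
    # Stand-in for remote video processing.
    time.sleep(0.1)
    return "labels"

with ThreadPoolExecutor(max_workers=1) as pool:
    operation = pool.submit(slow_annotation)

    # Before: a manual polling loop
    # while not operation.done():
    #     time.sleep(1)

    # After: block with an upper bound; raises TimeoutError if exceeded.
    result = operation.result(timeout=90)

print(result)
```

Blocking with a timeout avoids busy-waiting and turns a hung request into a clear `TimeoutError` instead of an indefinitely looping test.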

* addressing review comments

* flake

* flake

* Added "Open in Cloud Shell" buttons to README files [(#1254)](#1254)

* Auto-update dependencies. [(#1377)](#1377)

* Auto-update dependencies.

* Update requirements.txt

* Auto-update dependencies.

* Regenerate the README files and fix the Open in Cloud Shell link for some samples [(#1441)](#1441)

* Update READMEs to fix numbering and add git clone [(#1464)](#1464)

* Video Intelligence region tag update [(#1639)](#1639)

* Auto-update dependencies. [(#1658)](#1658)

* Auto-update dependencies.

* Rollback appengine/standard/bigquery/.

* Rollback appengine/standard/iap/.

* Rollback bigtable/metricscaler.

* Rollback appengine/flexible/datastore.

* Rollback dataproc/

* Rollback jobs/api_client

* Rollback vision/cloud-client.

* Rollback functions/ocr/app.

* Rollback iot/api-client/end_to_end_example.

* Rollback storage/cloud-client.

* Rollback kms/api-client.

* Rollback dlp/

* Rollback bigquery/cloud-client.

* Rollback iot/api-client/manager.

* Rollback appengine/flexible/cloudsql_postgresql.

* Use explicit URIs for Video Intelligence sample tests [(#1743)](#1743)

* Auto-update dependencies. [(#1846)](#1846)

ACK, merging.

* Longer timeouts to address intermittent failures [(#1871)](#1871)

* Auto-update dependencies. [(#1980)](#1980)

* Auto-update dependencies.

* Update requirements.txt

* Update requirements.txt

* replace demomaker with cloud-samples-data/video for video intelligenc… [(#2162)](#2162)

* replace demomaker with cloud-samples-data/video for video intelligence samples

* flake

* Adds updates for samples profiler ... vision [(#2439)](#2439)

* Auto-update dependencies. [(#2005)](#2005)

* Auto-update dependencies.

* Revert update of appengine/flexible/datastore.

* revert update of appengine/flexible/scipy

* revert update of bigquery/bqml

* revert update of bigquery/cloud-client

* revert update of bigquery/datalab-migration

* revert update of bigtable/quickstart

* revert update of compute/api

* revert update of container_registry/container_analysis

* revert update of dataflow/run_template

* revert update of datastore/cloud-ndb

* revert update of dialogflow/cloud-client

* revert update of dlp

* revert update of functions/imagemagick

* revert update of functions/ocr/app

* revert update of healthcare/api-client/fhir

* revert update of iam/api-client

* revert update of iot/api-client/gcs_file_to_device

* revert update of iot/api-client/mqtt_example

* revert update of language/automl

* revert update of run/image-processing

* revert update of vision/automl

* revert update testing/requirements.txt

* revert update of vision/cloud-client/detect

* revert update of vision/cloud-client/product_search

* revert update of jobs/v2/api_client

* revert update of jobs/v3/api_client

* revert update of opencensus

* revert update of translate/cloud-client

* revert update to speech/cloud-client

Co-authored-by: Kurtis Van Gent <31518063+kurtisvg@users.noreply.github.com>
Co-authored-by: Doug Mahugh <dmahugh@gmail.com>

* chore(deps): update dependency google-cloud-videointelligence to v1.14.0 [(#3169)](#3169)

* Simplify noxfile setup. [(#2806)](#2806)

* chore(deps): update dependency requests to v2.23.0

* Simplify noxfile and add version control.

* Configure appengine/standard to only test Python 2.7.

* Update Kokoro configs to match noxfile.

* Add requirements-test to each folder.

* Remove Py2 versions from everything except appengine/standard.

* Remove conftest.py.

* Remove appengine/standard/conftest.py

* Remove 'no-success-flaky-report' from pytest.ini.

* Add GAE SDK back to appengine/standard tests.

* Fix typo.

* Roll pytest to python 2 version.

* Add a bunch of testing requirements.

* Remove typo.

* Add appengine lib directory back in.

* Add some additional requirements.

* Fix issue with flake8 args.

* Even more requirements.

* Readd appengine conftest.py.

* Add a few more requirements.

* Even more Appengine requirements.

* Add webtest for appengine/standard/mailgun.

* Add some additional requirements.

* Add workaround for issue with mailjet-rest.

* Add responses for appengine/standard/mailjet.

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* fix: changes positional to named parameters in Video samples [(#4017)](#4017)

Changes calls to `VideoClient.annotate_video()` so that GCS URIs are provided as named parameters.

Example:
```python
operation = video_client.annotate_video(path, features=features)
```
Becomes:
```python
operation = video_client.annotate_video(input_uri=path, features=features)
```

* Update dependency google-cloud-videointelligence to v1.15.0 [(#4041)](#4041)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [google-cloud-videointelligence](https://togithub.com/googleapis/python-videointelligence) | minor | `==1.14.0` -> `==1.15.0` |

---

### Release Notes

<details>
<summary>googleapis/python-videointelligence</summary>

### [`v1.15.0`](https://togithub.com/googleapis/python-videointelligence/blob/master/CHANGELOG.md#&#8203;1150-httpswwwgithubcomgoogleapispython-videointelligencecomparev1140v1150-2020-06-09)

[Compare Source](https://togithub.com/googleapis/python-videointelligence/compare/v1.14.0...v1.15.0)

##### Features

-   add support for streaming automl action recognition in v1p3beta1; make 'features' a positional param for annotate_video in betas ([#&#8203;31](https://www.github.com/googleapis/python-videointelligence/issues/31)) ([586f920](https://www.github.com/googleapis/python-videointelligence/commit/586f920a1932e1a813adfed500502fba0ff5edb7)), closes [#&#8203;517](https://www.github.com/googleapis/python-videointelligence/issues/517) [#&#8203;538](https://www.github.com/googleapis/python-videointelligence/issues/538) [#&#8203;565](https://www.github.com/googleapis/python-videointelligence/issues/565) [#&#8203;576](https://www.github.com/googleapis/python-videointelligence/issues/576) [#&#8203;506](https://www.github.com/googleapis/python-videointelligence/issues/506) [#&#8203;586](https://www.github.com/googleapis/python-videointelligence/issues/586) [#&#8203;585](https://www.github.com/googleapis/python-videointelligence/issues/585)

</details>

---

### Renovate configuration

:date: **Schedule**: At any time (no schedule defined).

:vertical_traffic_light: **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

:recycle: **Rebasing**: Never, or you tick the rebase/retry checkbox.

:no_bell: **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [WhiteSource Renovate](https://renovate.whitesourcesoftware.com). View repository job log [here](https://app.renovatebot.com/dashboard#GoogleCloudPlatform/python-docs-samples).

* chore(deps): update dependency pytest to v5.4.3 [(#4279)](#4279)

* chore(deps): update dependency pytest to v5.4.3

* specify pytest for python 2 in appengine

Co-authored-by: Leah Cole <coleleah@google.com>

* Update dependency pytest to v6 [(#4390)](#4390)

* chore: pin sphinx

* chore: adds samples templates

* chore: temporarily pins sphinx

* chore: blacken noxfile

* chore: lints

* chore(deps): update dependency google-cloud-videointelligence to v1.16.0 [(#4798)](#4798)

* chore: fixes flaky tests

* chore(deps): update dependency pytest to v6.1.1 [(#4761)](#4761)

* chore(deps): update dependency pytest to v6.1.2 [(#4921)](#4921)

Co-authored-by: Charles Engelke <engelke@google.com>

* chore: updates samples templates

* chore: cleans up merge conflicts

* chore: blacken

* feat!: use microgenerator

* docs: update samples for microgenerator client
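
The call-shape change these sample updates apply can be sketched with a hypothetical stand-in client (`FakeClient` is not the real class — the samples use `videointelligence.VideoIntelligenceServiceClient` — only the request shape matters here):

```python
class FakeClient:
    def annotate_video(self, request):
        # Microgenerated clients accept a single `request` mapping
        # instead of separate keyword arguments.
        return sorted(request)

client = FakeClient()

# Before (GAPIC):  client.annotate_video(input_uri=uri, features=features)
# After (microgenerator): one request dict.
keys = client.annotate_video(
    request={
        "features": ["LABEL_DETECTION"],
        "input_uri": "gs://cloud-samples-data/video/cat.mp4",
    }
)
print(keys)  # ['features', 'input_uri']
```

Collapsing the arguments into one `request` mapping mirrors the protobuf request message, which is why every sample in this commit now wraps `features`, `input_uri`/`input_content`, and `video_context` in a dict.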

* docs: updates shotchange samples to microgen

* chore: deletes temp files

* chore: lint and blacken

* Update UPGRADING.md

Co-authored-by: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>

* Update setup.py

Co-authored-by: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>

Co-authored-by: Gus Class <gguuss@gmail.com>
Co-authored-by: Bill Prin <waprin@gmail.com>
Co-authored-by: florencep <florenceperot@google.com>
Co-authored-by: DPE bot <dpebot@google.com>
Co-authored-by: Jon Wayne Parrott <jonwayne@google.com>
Co-authored-by: Yu-Han Liu <dizcology@hotmail.com>
Co-authored-by: michaelawyu <chenyumic@google.com>
Co-authored-by: Perry Stoll <pstoll@users.noreply.github.com>
Co-authored-by: Frank Natividad <frankyn@users.noreply.github.com>
Co-authored-by: michaelawyu <michael.a.w.yu@hotmail.com>
Co-authored-by: Alix Hamilton <ajhamilton@google.com>
Co-authored-by: Charles Engelke <github@engelke.com>
Co-authored-by: Yu-Han Liu <yuhanliu@google.com>
Co-authored-by: Kurtis Van Gent <31518063+kurtisvg@users.noreply.github.com>
Co-authored-by: Doug Mahugh <dmahugh@gmail.com>
Co-authored-by: WhiteSource Renovate <bot@renovateapp.com>
Co-authored-by: Eric Schmidt <erschmid@google.com>
Co-authored-by: Leah Cole <coleleah@google.com>
Co-authored-by: gcf-merge-on-green[bot] <60162190+gcf-merge-on-green[bot]@users.noreply.github.com>
Co-authored-by: Charles Engelke <engelke@google.com>
Co-authored-by: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>
22 people authored Nov 19, 2020
1 parent 5397bbf commit 17a81f5
Showing 19 changed files with 339 additions and 287 deletions.
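
A recurring edit across these diffs: with the microgenerated client, segment time offsets are `datetime.timedelta` values rather than protobuf `Duration` messages, so samples read `.microseconds` instead of `.nanos`. A minimal sketch of the arithmetic:

```python
import datetime

# timedelta exposes seconds and microseconds, not nanos, so the
# fractional part is microseconds / 1e6 (or microseconds * 1000 nanos).
offset = datetime.timedelta(seconds=3, microseconds=250000)
start_s = offset.seconds + offset.microseconds / 1e6
print(start_s)  # 3.25
```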
129 changes: 72 additions & 57 deletions videointelligence/samples/analyze/analyze.py

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions videointelligence/samples/analyze/analyze_test.py
@@ -74,15 +74,15 @@ def test_speech_transcription(capsys):
def test_detect_text_gcs(capsys):
analyze.video_detect_text_gcs("gs://cloud-samples-data/video/googlework_tiny.mp4")
out, _ = capsys.readouterr()
assert 'Text' in out
assert "Text" in out


# Flaky timeout
@pytest.mark.flaky(max_runs=3, min_passes=1)
def test_detect_text(capsys):
analyze.video_detect_text("resources/googlework_tiny.mp4")
out, _ = capsys.readouterr()
assert 'Text' in out
assert "Text" in out


# Flaky timeout
161 changes: 83 additions & 78 deletions videointelligence/samples/analyze/beta_snippets.py

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions videointelligence/samples/analyze/beta_snippets_test.py
@@ -15,13 +15,13 @@
# limitations under the License.

import os
from urllib.request import urlopen
import uuid

import backoff
from google.api_core.exceptions import Conflict
from google.cloud import storage
import pytest
from six.moves.urllib.request import urlopen

import beta_snippets

@@ -55,7 +55,7 @@ def video_path(tmpdir_factory):
@pytest.fixture(scope="function")
def bucket():
# Create a temporaty bucket to store annotation output.
bucket_name = f'tmp-{uuid.uuid4().hex}'
bucket_name = f"tmp-{uuid.uuid4().hex}"
storage_client = storage.Client()
bucket = storage_client.create_bucket(bucket_name)

@@ -128,7 +128,7 @@ def test_detect_text(capsys):
in_file = "./resources/googlework_tiny.mp4"
beta_snippets.video_detect_text(in_file)
out, _ = capsys.readouterr()
assert 'Text' in out
assert "Text" in out


# Flaky timeout
@@ -137,7 +137,7 @@ def test_detect_text_gcs(capsys):
in_file = "gs://python-docs-samples-tests/video/googlework_tiny.mp4"
beta_snippets.video_detect_text_gcs(in_file)
out, _ = capsys.readouterr()
assert 'Text' in out
assert "Text" in out


# Flaky InvalidArgument
34 changes: 17 additions & 17 deletions videointelligence/samples/analyze/noxfile.py
@@ -37,28 +37,25 @@

TEST_CONFIG = {
# You can opt out from the test for specific Python versions.
'ignored_versions': ["2.7"],

"ignored_versions": ["2.7"],
# Old samples are opted out of enforcing Python type hints
# All new samples should feature them
'enforce_type_hints': False,

"enforce_type_hints": False,
# An envvar key for determining the project id to use. Change it
# to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a
# build specific Cloud project. You can also use your own string
# to use your own Cloud project.
'gcloud_project_env': 'GOOGLE_CLOUD_PROJECT',
"gcloud_project_env": "GOOGLE_CLOUD_PROJECT",
# 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT',

# A dictionary you want to inject into your test. Don't put any
# secrets here. These values will override predefined values.
'envs': {},
"envs": {},
}


try:
# Ensure we can import noxfile_config in the project's directory.
sys.path.append('.')
sys.path.append(".")
from noxfile_config import TEST_CONFIG_OVERRIDE
except ImportError as e:
print("No user noxfile_config found: detail: {}".format(e))
@@ -73,12 +70,12 @@ def get_pytest_env_vars():
ret = {}

# Override the GCLOUD_PROJECT and the alias.
env_key = TEST_CONFIG['gcloud_project_env']
env_key = TEST_CONFIG["gcloud_project_env"]
# This should error out if not set.
ret['GOOGLE_CLOUD_PROJECT'] = os.environ[env_key]
ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key]

# Apply user supplied envs.
ret.update(TEST_CONFIG['envs'])
ret.update(TEST_CONFIG["envs"])
return ret


@@ -87,7 +84,7 @@ def get_pytest_env_vars():
ALL_VERSIONS = ["2.7", "3.6", "3.7", "3.8"]

# Any default versions that should be ignored.
IGNORED_VERSIONS = TEST_CONFIG['ignored_versions']
IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"]

TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS])

@@ -136,7 +133,7 @@ def _determine_local_import_names(start_dir):

@nox.session
def lint(session):
if not TEST_CONFIG['enforce_type_hints']:
if not TEST_CONFIG["enforce_type_hints"]:
session.install("flake8", "flake8-import-order")
else:
session.install("flake8", "flake8-import-order", "flake8-annotations")
@@ -145,9 +142,11 @@ def lint(session):
args = FLAKE8_COMMON_ARGS + [
"--application-import-names",
",".join(local_names),
"."
".",
]
session.run("flake8", *args)


#
# Black
#
@@ -160,6 +159,7 @@ def blacken(session):

session.run("black", *python_files)


#
# Sample Tests
#
@@ -199,9 +199,9 @@ def py(session):
if session.python in TESTED_VERSIONS:
_session_tests(session)
else:
session.skip("SKIPPED: {} tests are disabled for this sample.".format(
session.python
))
session.skip(
"SKIPPED: {} tests are disabled for this sample.".format(session.python)
)


#
16 changes: 9 additions & 7 deletions videointelligence/samples/analyze/video_detect_faces_beta.py
@@ -27,16 +27,18 @@ def detect_faces(local_file_path="path/to/your/video-file.mp4"):
input_content = f.read()

# Configure the request
config = videointelligence.types.FaceDetectionConfig(
config = videointelligence.FaceDetectionConfig(
include_bounding_boxes=True, include_attributes=True
)
context = videointelligence.types.VideoContext(face_detection_config=config)
context = videointelligence.VideoContext(face_detection_config=config)

# Start the asynchronous request
operation = client.annotate_video(
input_content=input_content,
features=[videointelligence.enums.Feature.FACE_DETECTION],
video_context=context,
request={
"features": [videointelligence.Feature.FACE_DETECTION],
"input_content": input_content,
"video_context": context,
}
)

print("\nProcessing video for face detection annotations.")
Expand All @@ -53,9 +55,9 @@ def detect_faces(local_file_path="path/to/your/video-file.mp4"):
print(
"Segment: {}s to {}s".format(
track.segment.start_time_offset.seconds
+ track.segment.start_time_offset.nanos / 1e9,
+ track.segment.start_time_offset.microseconds / 1e6,
track.segment.end_time_offset.seconds
+ track.segment.end_time_offset.nanos / 1e9,
+ track.segment.end_time_offset.microseconds / 1e6,
)
)

16 changes: 9 additions & 7 deletions videointelligence/samples/analyze/video_detect_faces_gcs_beta.py
@@ -22,16 +22,18 @@ def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
client = videointelligence.VideoIntelligenceServiceClient()

# Configure the request
config = videointelligence.types.FaceDetectionConfig(
config = videointelligence.FaceDetectionConfig(
include_bounding_boxes=True, include_attributes=True
)
context = videointelligence.types.VideoContext(face_detection_config=config)
context = videointelligence.VideoContext(face_detection_config=config)

# Start the asynchronous request
operation = client.annotate_video(
input_uri=gcs_uri,
features=[videointelligence.enums.Feature.FACE_DETECTION],
video_context=context,
request={
"features": [videointelligence.Feature.FACE_DETECTION],
"input_uri": gcs_uri,
"video_context": context,
}
)

print("\nProcessing video for face detection annotations.")
@@ -48,9 +50,9 @@ def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
print(
"Segment: {}s to {}s".format(
track.segment.start_time_offset.seconds
+ track.segment.start_time_offset.nanos / 1e9,
+ track.segment.start_time_offset.microseconds / 1e6,
track.segment.end_time_offset.seconds
+ track.segment.end_time_offset.nanos / 1e9,
+ track.segment.end_time_offset.microseconds / 1e6,
)
)

16 changes: 10 additions & 6 deletions videointelligence/samples/analyze/video_detect_logo.py
@@ -26,9 +26,11 @@ def detect_logo(local_file_path="path/to/your/video.mp4"):

with io.open(local_file_path, "rb") as f:
input_content = f.read()
features = [videointelligence.enums.Feature.LOGO_RECOGNITION]
features = [videointelligence.Feature.LOGO_RECOGNITION]

operation = client.annotate_video(input_content=input_content, features=features)
operation = client.annotate_video(
request={"features": features, "input_content": input_content}
)

print(u"Waiting for operation to complete...")
response = operation.result()
Expand All @@ -53,13 +55,13 @@ def detect_logo(local_file_path="path/to/your/video.mp4"):
print(
u"\n\tStart Time Offset : {}.{}".format(
track.segment.start_time_offset.seconds,
track.segment.start_time_offset.nanos,
track.segment.start_time_offset.microseconds * 1000,
)
)
print(
u"\tEnd Time Offset : {}.{}".format(
track.segment.end_time_offset.seconds,
track.segment.end_time_offset.nanos,
track.segment.end_time_offset.microseconds * 1000,
)
)
print(u"\tConfidence : {}".format(track.confidence))
@@ -91,12 +93,14 @@ def detect_logo(local_file_path="path/to/your/video.mp4"):
for segment in logo_recognition_annotation.segments:
print(
u"\n\tStart Time Offset : {}.{}".format(
segment.start_time_offset.seconds, segment.start_time_offset.nanos,
segment.start_time_offset.seconds,
segment.start_time_offset.microseconds * 1000,
)
)
print(
u"\tEnd Time Offset : {}.{}".format(
segment.end_time_offset.seconds, segment.end_time_offset.nanos,
segment.end_time_offset.seconds,
segment.end_time_offset.microseconds * 1000,
)
)

16 changes: 10 additions & 6 deletions videointelligence/samples/analyze/video_detect_logo_gcs.py
@@ -21,9 +21,11 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"):

client = videointelligence.VideoIntelligenceServiceClient()

features = [videointelligence.enums.Feature.LOGO_RECOGNITION]
features = [videointelligence.Feature.LOGO_RECOGNITION]

operation = client.annotate_video(input_uri=input_uri, features=features)
operation = client.annotate_video(
request={"features": features, "input_uri": input_uri}
)

print(u"Waiting for operation to complete...")
response = operation.result()
@@ -49,13 +51,13 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"):
print(
u"\n\tStart Time Offset : {}.{}".format(
track.segment.start_time_offset.seconds,
track.segment.start_time_offset.nanos,
track.segment.start_time_offset.microseconds * 1000,
)
)
print(
u"\tEnd Time Offset : {}.{}".format(
track.segment.end_time_offset.seconds,
track.segment.end_time_offset.nanos,
track.segment.end_time_offset.microseconds * 1000,
)
)
print(u"\tConfidence : {}".format(track.confidence))
@@ -86,12 +88,14 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"):
for segment in logo_recognition_annotation.segments:
print(
u"\n\tStart Time Offset : {}.{}".format(
segment.start_time_offset.seconds, segment.start_time_offset.nanos,
segment.start_time_offset.seconds,
segment.start_time_offset.microseconds * 1000,
)
)
print(
u"\tEnd Time Offset : {}.{}".format(
segment.end_time_offset.seconds, segment.end_time_offset.nanos,
segment.end_time_offset.seconds,
segment.end_time_offset.microseconds * 1000,
)
)

12 changes: 7 additions & 5 deletions videointelligence/samples/analyze/video_detect_person_beta.py
@@ -36,9 +36,11 @@ def detect_person(local_file_path="path/to/your/video-file.mp4"):

# Start the asynchronous request
operation = client.annotate_video(
input_content=input_content,
features=[videointelligence.enums.Feature.PERSON_DETECTION],
video_context=context,
request={
"features": [videointelligence.Feature.PERSON_DETECTION],
"input_content": input_content,
"video_context": context,
}
)

print("\nProcessing video for person detection annotations.")
@@ -55,9 +57,9 @@ def detect_person(local_file_path="path/to/your/video-file.mp4"):
print(
"Segment: {}s to {}s".format(
track.segment.start_time_offset.seconds
+ track.segment.start_time_offset.nanos / 1e9,
+ track.segment.start_time_offset.microseconds / 1e6,
track.segment.end_time_offset.seconds
+ track.segment.end_time_offset.nanos / 1e9,
+ track.segment.end_time_offset.microseconds / 1e6,
)
)

@@ -31,9 +31,11 @@ def detect_person(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):

# Start the asynchronous request
operation = client.annotate_video(
input_uri=gcs_uri,
features=[videointelligence.enums.Feature.PERSON_DETECTION],
video_context=context,
request={
"features": [videointelligence.Feature.PERSON_DETECTION],
"input_uri": gcs_uri,
"video_context": context,
}
)

print("\nProcessing video for person detection annotations.")
@@ -50,9 +52,9 @@ def detect_person(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
print(
"Segment: {}s to {}s".format(
track.segment.start_time_offset.seconds
+ track.segment.start_time_offset.nanos / 1e9,
+ track.segment.start_time_offset.microseconds / 1e6,
track.segment.end_time_offset.seconds
+ track.segment.end_time_offset.nanos / 1e9,
+ track.segment.end_time_offset.microseconds / 1e6,
)
)
