
Merge pull request #663 from roboflow/fix/seo_issues_with_docs
Fix docs links
PawelPeczek-Roboflow authored Sep 20, 2024
2 parents 1236205 + c7beb36 commit 5506f4b
Showing 17 changed files with 34 additions and 40 deletions.
2 changes: 1 addition & 1 deletion docs/enterprise/active-learning/active_learning.md
@@ -41,7 +41,7 @@ Active Learning data collection may be combined with different components of the
- self-hosted `inference` server - where data is collected while processing requests
- Roboflow hosted `inference` - where you let us make sure you get your predictions and data registered. No
infrastructure needs to run on your end, we take care of everything
-- [Roboflow `workflows`](../../workflows/about.md) - our newest feature - supports [`ActiveLearningDataCollectionBlock`](../../workflows/active_learning.md)
+- [Roboflow `workflows`](../../workflows/about.md) - our newest feature - supports [`Roboflow Dataset Upload block`](/workflows/blocks/roboflow_dataset_upload/)
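
To ground the self-hosted option in the list above: requests sent through a local `inference` server can also register data for Active Learning when the project has a configuration enabled. A minimal sketch using `inference-sdk` (server URL, API key, and model ID below are placeholders):

```python
from inference_sdk import InferenceHTTPClient

# Placeholder server URL, API key, and model ID; adjust to your deployment.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",
)

# Each request served this way can be sampled and registered by Active Learning.
result = client.infer("image.jpg", model_id="your-project/1")
```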


## Sampling Strategies
1 change: 0 additions & 1 deletion docs/enterprise/active-learning/classes_based.md
@@ -37,4 +37,3 @@ Here is an example of a configuration manifest for the close to threshold sampling
}
```

-Learn how to [configure active learning](../active_learning.md#configuration) for your model.
@@ -55,4 +55,3 @@ Here is an example of a configuration manifest for the close to threshold sampling
}
```

-Learn how to [configure active learning](../active_learning.md#configuration) for your model.
2 changes: 0 additions & 2 deletions docs/enterprise/active-learning/detection_number.md
@@ -42,5 +42,3 @@ This strategy is available for the following model types:
]
}
```
-
-Learn how to [configure active learning](../active_learning.md#configuration) for your model.
2 changes: 0 additions & 2 deletions docs/enterprise/active-learning/random_sampling.md
@@ -38,5 +38,3 @@ Here is an example of a configuration manifest for random sampling strategy:
]
}
```
-
-Learn how to [configure active learning](../active_learning.md#configuration) for your model.
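
For context, the manifest whose closing braces appear above can be mirrored as a Python dict; the field names below are assumptions based on the random sampling strategy documentation, not the verbatim upstream example:

```python
# Representative random-sampling strategy entry (field names are assumptions).
random_sampling_strategy = {
    "name": "default_strategy",
    "type": "random",
    "traffic_percentage": 0.1,  # sample roughly 10% of traffic
    "tags": ["random-sample"],
    "limits": [
        {"type": "hourly", "value": 100},
        {"type": "daily", "value": 1000},
    ],
}
```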
2 changes: 1 addition & 1 deletion docs/foundation/clip.md
@@ -19,7 +19,7 @@ In this guide, we will show:

- directly from `inference[clip]` package, integrating the model directly into your code
- using `inference` HTTP API (hosted locally, or on the Roboflow platform), integrating via HTTP protocol
-- using `inference-sdk` package (`pip install inference-sdk`) and [`InferenceHTTPClient`](/docs/inference_sdk/http_client.md)
+- using `inference-sdk` package (`pip install inference-sdk`) and [`InferenceHTTPClient`](/inference_helpers/inference_sdk/)
- creating custom code to make HTTP requests (see [API Reference](/api/))

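As an illustration of the `inference-sdk` route from the list above, a minimal sketch comparing an image against text prompts with CLIP (server URL, API key, and file name are placeholders):

```python
from inference_sdk import InferenceHTTPClient

# Placeholder server URL and API key.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",
)

# Compare an image against text prompts; returns CLIP similarity scores.
result = client.clip_compare(
    subject="image.jpg",
    prompt=["a cat", "a dog"],
)
```
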
## Supported CLIP versions
2 changes: 1 addition & 1 deletion docs/index.md
@@ -239,7 +239,7 @@ Below you can find list of extras available for `inference` and `inference-gpu`
</tr>
<tr>
<td><code>yolo-world</code></td>
-    <td><a href="/foundation/yolo-world">Yolo-World model</a></td>
+    <td><a href="/foundation/yolo_world/">Yolo-World model</a></td>
<td><code>N/A</code></td>
</tr>
</table>
2 changes: 1 addition & 1 deletion docs/inference_helpers/inference_sdk.md
@@ -451,7 +451,7 @@ CLIENT.unload_model(model_id="some/1")
```

Sometimes (to avoid OOM at the server side) unloading a model will be required.
-[test_postprocessing.py](..%2F..%2Ftests%2Finference_client%2Funit_tests%2Fhttp%2Futils%2Ftest_postprocessing.py)


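For completeness, a self-contained sketch of the `unload_model` call shown above (server URL and API key are placeholders):

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",
)
client.unload_model(model_id="some/1")  # frees server memory held by that model
```
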
!!! tip

6 changes: 3 additions & 3 deletions docs/models/from_local_weights.md
@@ -1,4 +1,4 @@
-You can upload [supported weights](/models/supported_models/) to Roboflow and deploy them to your device.
+You can upload [supported weights](#supported_models) to Roboflow and deploy them to your device.

This is ideal if you have already trained a model outside of Roboflow that you want to deploy with Inference.

@@ -33,7 +33,7 @@ version = project.version(1)
version.deploy("model-type", "path/to/training/results/")
```

-The following model types are supported:
+<a name="supported_models">The following model types are supported:</a>

|Model Architecture|Task |Model Type ID |
|------------------|----------------|-------------------|
@@ -56,7 +56,7 @@ The following model types are supported:

In the code above, replace:

-1. `your-project-id` with the ID of your project. [Learn how to retrieve your Roboflow project ID](/docs/projects/where_is_my_project_id/).
+1. `your-project-id` with the ID of your project. [Learn how to retrieve your Roboflow project ID](https://docs.roboflow.com/api-reference/workspace-and-project-ids).
2. `1` with the version number of your project.
3. `model-type` with the model type you want to deploy.
4. `path/to/training/results/` with the path to the weights you want to upload. This path will vary depending on what model architecture you are using.
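
Pulling the steps together, a hedged end-to-end sketch using the `roboflow` package; the API key, project ID, model type, and weights path are placeholders to substitute per the list above:

```python
import roboflow

rf = roboflow.Roboflow(api_key="YOUR_ROBOFLOW_API_KEY")
project = rf.workspace().project("your-project-id")
version = project.version(1)
# "yolov8" and the runs/ path are illustrative; pick the model type ID from the
# table above and the weights directory produced by your training run.
version.deploy("yolov8", "runs/detect/train/")
```
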
2 changes: 1 addition & 1 deletion docs/server_configuration/environmental_variables.md
@@ -1,6 +1,6 @@
# Environmental variables

-`Inference` behavior can be controlled by set of environmental variables. All environmental variables are listed in [inference/core/env.py](inference/core/env.py)
+`Inference` behavior can be controlled by a set of environmental variables. All environmental variables are listed in [inference/core/env.py](https://github.com/roboflow/inference/blob/main/inference/core/env.py)

Below is a list of some environmental variables that require more in-depth explanation.

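For instance, a variable can be set from Python before `inference` is imported, so that `env.py` reads it at import time; a minimal sketch using `CLASS_AGNOSTIC_NMS`, which appears later in this document:

```python
import os

# Must be set before importing inference, since env.py reads the environment
# at import time.
os.environ["CLASS_AGNOSTIC_NMS"] = "True"

from inference import get_model  # noqa: E402  (import after env setup on purpose)
```
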
2 changes: 1 addition & 1 deletion docs/using_inference/inference_pipeline.md
@@ -319,7 +319,7 @@ of frames in tiles mosaic.

!!! Info

-    See our [tutorial on creating a custom Inference Pipeline sink!](/quickstart/create_a_custom_inference_pipeline_sink/)
+    See our [tutorial on creating a custom Inference Pipeline sink!](#custom-sinks)

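As a quick reference, a custom sink is just a callable passed as `on_prediction`; a minimal sketch, in which the model alias and webcam reference are placeholder choices:

```python
from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame


def my_sink(prediction: dict, video_frame: VideoFrame) -> None:
    # Called for every processed frame; replace the print with your own logic.
    print(prediction)


pipeline = InferencePipeline.init(
    model_id="yolov8n-640",  # public alias; swap in your own model ID
    video_reference=0,       # webcam; a file path or RTSP URL also works
    on_prediction=my_sink,
)
pipeline.start()
pipeline.join()
```
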
**prediction**

2 changes: 1 addition & 1 deletion docs/using_inference/native_python_api.md
@@ -20,7 +20,7 @@ The `get_model` method is a utility function which will help us load a computer

!!! Hint

-    You can find your models project name and version number <a href="https://docs.roboflow.com/api-reference/workspace-and-project-ids" target="_blank">in the Roboflow App</a>. You can also browse public models that are ready to use on <a href="https://universe.roboflow.com/" target="_blank">Roboflow Universe</a>. In this example, we are using a special model ID that is an alias of <a href="https://universe.roboflow.com/microsoft/coco/model/13" target="_blank">a COCO pretrained model on Roboflow Universe</a>. You can see the list of model aliases [here](../../reference_pages/model_aliases).
+    You can find your model's project name and version number <a href="https://docs.roboflow.com/api-reference/workspace-and-project-ids" target="_blank">in the Roboflow App</a>. You can also browse public models that are ready to use on <a href="https://universe.roboflow.com/" target="_blank">Roboflow Universe</a>. In this example, we are using a special model ID that is an alias of <a href="https://universe.roboflow.com/microsoft/coco/model/13" target="_blank">a COCO pretrained model on Roboflow Universe</a>. You can see the list of model aliases [here](/quickstart/aliases/#supported-pre-trained-models).

Next, we can run inference with our model by providing an input image:
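
The code that follows in the original document is collapsed in this view; it presumably matches this sketch (the alias model ID comes from the hint above; the image path is a placeholder):

```python
from inference import get_model

model = get_model(model_id="yolov8n-640")   # alias of a COCO-pretrained model
results = model.infer("path/to/image.jpg")  # a URL or numpy array also works
```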

2 changes: 1 addition & 1 deletion docs/workflows/create_workflow_block.md
@@ -199,7 +199,7 @@ we will be creating SIMD block.
If you look deeper into codebase, you will discover those are type aliases - telling `pydantic`
to expect string matching `$inputs.{name}` and `$steps.{name}.*` patterns respectively, additionally providing
extra schema field metadata that tells Workflows ecosystem components that the `kind` of data behind selector is
-[image](/workflows/kinds/batch_image/).
+[image](/workflows/kinds/image/).

* denoting `pydantic` `Field(...)` attribute in the last parts of line `17` is optional, yet appreciated,
especially for blocks intended to cooperate with Workflows UI
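
For orientation, a minimal manifest using these selector aliases might look like the sketch below; the import paths and alias names are recalled from the inference codebase and should be treated as assumptions, and a real manifest also declares its outputs:

```python
from typing import Literal, Union

from pydantic import Field

from inference.core.workflows.execution_engine.entities.types import (
    StepOutputImageSelector,
    WorkflowImageSelector,
)
from inference.core.workflows.prototypes.block import WorkflowBlockManifest


class ExampleBlockManifest(WorkflowBlockManifest):
    # Discriminator the Execution Engine uses to pick this block (hypothetical ID).
    type: Literal["my_plugin/example@v1"]
    # Accepts `$inputs.{name}` or `$steps.{name}.*` selectors whose underlying
    # kind is `image`.
    image: Union[WorkflowImageSelector, StepOutputImageSelector] = Field(
        description="Image to process",
    )
```
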
2 changes: 1 addition & 1 deletion docs/workflows/custom_python_code_blocks.md
@@ -4,7 +4,7 @@ When the syntax for Workflow definitions was [outlined](/workflows/definitions/)
aspect was not covered: the ability to define blocks directly within the Workflow definition itself. This section can
include the manifest and Python code for blocks defined in-place, which are dynamically interpreted by the
Execution Engine. These in-place blocks function similarly to those statically defined in
-[plugins](/workflows/workflows_bundling/), yet provide much more flexibility.
+[plugins](/workflows/blocks_bundling/), yet provide much more flexibility.


!!! Warning
2 changes: 1 addition & 1 deletion docs/workflows/workflows_execution_engine.md
@@ -77,7 +77,7 @@ As the definition suggests, a SIMD (Single Instruction, Multiple Data) step processes
same operation is applied to each data point, potentially using non-batch-oriented parameters for configuration.
The output from such a step is expected to be a batch of elements, preserving the order of the input batch elements.
This applies to both regular processing steps and flow-control steps (see
-[blocks development guide](/workflows/create_workflow_block/ for more on their nature), where flow-control decisions
+[blocks development guide](/workflows/create_workflow_block/)), where flow-control decisions
affect each batch element individually.

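To make the batch orientation concrete, here is a language-level sketch; it is illustrative only, not Execution Engine code:

```python
# A SIMD step applies one operation to every element of a batch, shares its
# non-batch parameters across elements, and preserves input order in the output.
def simd_step(batch: list, threshold: float) -> list:
    return [value >= threshold for value in batch]


assert simd_step([0.2, 0.8, 0.5], threshold=0.5) == [False, True, True]
```
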
In essence, the type of data fed into the step determines whether it's SIMD or non-SIMD. If a step requests any
16 changes: 8 additions & 8 deletions inference/core/interfaces/stream/inference_pipeline.py
@@ -157,10 +157,10 @@ def init(
without re-raising. Default: None.
source_buffer_filling_strategy (Optional[BufferFillingStrategy]): Parameter dictating strategy for
video stream decoding behaviour. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
source_buffer_consumption_strategy (Optional[BufferConsumptionStrategy]): Parameter dictating strategy for
video stream frames consumption. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
class_agnostic_nms (Optional[bool]): Parameter of model post-processing. If not given - value checked in
env variable "CLASS_AGNOSTIC_NMS" with default "False"
confidence (Optional[float]): Parameter of model post-processing. If not given - value checked in
@@ -333,10 +333,10 @@ def init_with_yolo_world(
without re-raising. Default: None.
source_buffer_filling_strategy (Optional[BufferFillingStrategy]): Parameter dictating strategy for
video stream decoding behaviour. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
source_buffer_consumption_strategy (Optional[BufferConsumptionStrategy]): Parameter dictating strategy for
video stream frames consumption. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
class_agnostic_nms (Optional[bool]): Parameter of model post-processing. If not given - value checked in
env variable "CLASS_AGNOSTIC_NMS" with default "False"
confidence (Optional[float]): Parameter of model post-processing. If not given - value checked in
@@ -483,10 +483,10 @@ def init_with_workflow(
without re-raising. Default: None.
source_buffer_filling_strategy (Optional[BufferFillingStrategy]): Parameter dictating strategy for
video stream decoding behaviour. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
source_buffer_consumption_strategy (Optional[BufferConsumptionStrategy]): Parameter dictating strategy for
video stream frames consumption. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
video_source_properties (Optional[dict[str, float]]): Optional source properties to set up the video source,
corresponding to cv2 VideoCapture properties cv2.CAP_PROP_*. If not given, defaults for the video source
will be used.
@@ -651,10 +651,10 @@ def init_with_custom_logic(
without re-raising. Default: None.
source_buffer_filling_strategy (Optional[BufferFillingStrategy]): Parameter dictating strategy for
video stream decoding behaviour. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
source_buffer_consumption_strategy (Optional[BufferConsumptionStrategy]): Parameter dictating strategy for
video stream frames consumption. By default - tweaked to the type of source given.
-            Please find detailed explanation in docs of [`VideoSource`](../camera/video_source.py)
+            Please find detailed explanation in docs of [`VideoSource`](/docs/reference/inference/core/interfaces/camera/video_source/#inference.core.interfaces.camera.video_source.VideoSource)
video_source_properties (Optional[Union[Dict[str, float], List[Optional[Dict[str, float]]]]]):
Optional source properties to set up the video source, corresponding to cv2 VideoCapture properties
cv2.CAP_PROP_*. If not given, defaults for the video source will be used.
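
Putting the buffer-strategy parameters above into context, a sketch of pipeline initialization; the model ID and stream URL are placeholders, and the enum members are recalled from `inference.core.interfaces.camera.video_source`:

```python
from inference import InferencePipeline
from inference.core.interfaces.camera.video_source import (
    BufferConsumptionStrategy,
    BufferFillingStrategy,
)

pipeline = InferencePipeline.init(
    model_id="yolov8n-640",            # public alias; assumption for the sketch
    video_reference="rtsp://example",  # hypothetical stream URL
    on_prediction=print,
    # Drop stale frames when the decode buffer fills, and consume eagerly.
    source_buffer_filling_strategy=BufferFillingStrategy.DROP_OLDEST,
    source_buffer_consumption_strategy=BufferConsumptionStrategy.EAGER,
)
pipeline.start()
pipeline.join()
```
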
@@ -3,15 +3,14 @@
import pytest
from pydantic import ValidationError

-from inference.core.workflows.execution_engine.entities.base import (
-    ImageParentMetadata,
-    WorkflowImageData,
-)
-
from inference.core.workflows.core_steps.classical_cv.sift_comparison.v2 import (
SIFTComparisonBlockManifest,
SIFTComparisonBlockV2,
)
+from inference.core.workflows.execution_engine.entities.base import (
+    ImageParentMetadata,
+    WorkflowImageData,
+)


def test_sift_comparison_validation_when_valid_manifest_is_given() -> None:
@@ -24,7 +23,7 @@ def test_sift_comparison_validation_when_valid_manifest_is_given() -> None:
"good_matches_threshold": 50,
"ratio_threshold": 0.7,
"matcher": "FlannBasedMatcher",
"visualize": True
"visualize": True,
}

# when
@@ -39,7 +38,7 @@ def test_sift_comparison_validation_when_valid_manifest_is_given() -> None:
good_matches_threshold=50,
ratio_threshold=0.7,
matcher="FlannBasedMatcher",
-        visualize=True
+        visualize=True,
)


@@ -72,7 +71,7 @@ def test_sift_comparison_block_with_descriptors(dogs_image: np.ndarray) -> None:
input_2=descriptor_2,
good_matches_threshold=50,
ratio_threshold=0.7,
-        visualize=False
+        visualize=False,
)

# then
@@ -82,10 +81,11 @@ def test_sift_comparison_block_with_descriptors(dogs_image: np.ndarray) -> None:
assert output["visualization_2"] is None
assert output["visualization_matches"] is None

+
def test_sift_comparison_block_with_images(dogs_image: np.ndarray) -> None:
# given
block = SIFTComparisonBlockV2()

# when
output = block.run(
input_1=WorkflowImageData(
@@ -98,7 +98,7 @@ def test_sift_comparison_block_with_images(dogs_image: np.ndarray) -> None:
),
good_matches_threshold=50,
ratio_threshold=0.7,
-        visualize=False
+        visualize=False,
)

# then
@@ -108,10 +108,11 @@ def test_sift_comparison_block_with_images(dogs_image: np.ndarray) -> None:
assert output["visualization_2"] is None
assert output["visualization_matches"] is None

+
def test_sift_comparison_block_with_visualization(dogs_image: np.ndarray) -> None:
# given
block = SIFTComparisonBlockV2()

# when
output = block.run(
input_1=WorkflowImageData(
@@ -124,7 +125,7 @@ def test_sift_comparison_block_with_visualization(dogs_image: np.ndarray) -> None:
),
good_matches_threshold=50,
ratio_threshold=0.7,
-        visualize=True
+        visualize=True,
)

# then
@@ -133,4 +134,3 @@ def test_sift_comparison_block_with_visualization(dogs_image: np.ndarray) -> None:
assert output["visualization_1"] is not None
assert output["visualization_2"] is not None
assert output["visualization_matches"] is not None
-