[docs] update v4.0 markdown docs for Raptor/Decide releases #298

Closed · wants to merge 7 commits
25 changes: 16 additions & 9 deletions .travis.yml
@@ -15,14 +15,15 @@ branches:
- /^v\d+\.\d+(\.\d+)?(-\S*)?$/

stages:
- 'Readme-sync-preview'
- 'Readme-sync'
- 'Tests'
- 'Trigger FSC Tests'
- 'Test Build using latest tag (no upload)'
- 'Build, Upload and Publish (draft)'
- 'Test github release assets'
- 'Publish (real)'
- 'Readme-sync-preview'
- 'Readme-sync'


jobs:

@@ -240,15 +241,19 @@ jobs:
# we need to be in $TRAVIS_BUILD_DIR in order to run the following git diff properly
- cd $TRAVIS_BUILD_DIR

#print which docs changed in this Pull Request (and which therefore we expect to be updated by readme-sync-2 tool):
# print which docs changed in this Pull Request (and which therefore we expect to be updated by readme-sync-2 tool):
- CHANGED_DOCS_FILES=($(git diff --name-only $TRAVIS_COMMIT_RANGE -- docs/readme-sync))
- echo $CHANGED_DOCS_FILES

#only if changes were made in the docs/readme-sync repo, trigger this readme-sync stage and sync the docs
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY_PREVIEW --version 4.0 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/)

# only if changes were made in the docs/readme-sync repo, trigger this readme-sync stage and sync the docs
# to staging readme project at https://rollouts-sandbox-doc-test.readme.io/docs


# build v3.1 docs to v 1.0 of readme staging project
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync/v3.1 || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY_PREVIEW --version 1.0 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/v3.1)
# build v4.0 docs to v 1.5 of staging project
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync/v4.0 || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY_PREVIEW --version 1.5 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/v4.0)

- stage: 'Readme-sync'
before_script: skip
cache: false
@@ -272,9 +277,11 @@ jobs:
- CHANGED_DOCS_FILES=($(git diff --name-only $TRAVIS_COMMIT_RANGE -- docs/readme-sync))
- echo $CHANGED_DOCS_FILES

#only if changes were made in the docs/readme-sync repo, trigger this readme-sync stage and sync the docs
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY --version 3.1 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/)

#only if changes were made in the docs/readme-sync repo, trigger this readme-sync stage and sync the docs
# sync v3.1 docs folder to readme project v 3.1
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync/v3.1 || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY --version 3.1 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/v3.1)
# sync v4.0 docs folder to readme project v 4.0
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync/v4.0 || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY --version 4.0 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/v4.0)
#########################################################################################
# directories/scripts for full SDK-reference-guides, to be implemented after agent docs sync
#########################################################################################
@@ -85,4 +85,4 @@ resp = s.post(url = 'http://localhost:8080/v1/activate', params=params, json=pay
print(resp.json())
```

The activate API is a POST to signal to the caller that there are side-effects. Namely, activation results in a "decision" event sent to Optimizely analytics for the purpose of analyzing Feature Test results. A "decision" will NOT be sent if the feature is simply part of a rollout.
@@ -33,8 +33,8 @@ for key in env['featuresMap']:
print(key)
```

## Activate Feature
The `/v1/activate?featureKey={key}` endpoint activates the feature for a given user. In Optimizely, activation is in the context of a given user to make the relative bucketing decision. In this case we'll provide a `userId` via the request body. The `userId` will be used to determine how the feature will be evaluated. Features can either be part of a Feature Test in which variations of feature variables are being measured against one another or a feature rollout, which progressively make the feature available to the selected audience.
## Run flag rules
The `POST /v1/decide?keys={flagKey}` endpoint activates the feature for a given user. In Optimizely, activation happens in the context of a given user so the relative bucketing decision can be made. In this case we'll provide a `userId` via the request body. The `userId` determines how the feature is evaluated. Features can either be part of a Feature Test, in which variations of feature variables are measured against one another, or a feature rollout, which progressively makes the feature available to the selected audience.

From an API standpoint the presence of a Feature Test or Rollout is abstracted away from the response and only the resulting variation or enabled feature is returned.

@@ -96,4 +96,4 @@ There is no response body for successful conversion event requests.

### API reference

For more details on Optimizely Agent’s APIs, see the [complete API Reference](https://library.optimizely.com/docs/api/agent/v1/index.html).
@@ -6,6 +6,6 @@ hidden: false
createdAt: "2020-02-21T17:44:52.492Z"
updatedAt: "2020-02-21T17:44:52.492Z"
type: "link"
link_url: "https://optimizely.github.io/docs/api/agent/"
link_url: "https://library.optimizely.com/docs/api/agent/v1/index.html"
link_external: true
---
@@ -0,0 +1,10 @@
---
title: "CAUTION - DON'T AUTHOR AGENT DOCS IN README"
excerpt: ""
slug: "author-agent-docs-in-github"
hidden: true
createdAt: "2020-02-21T17:44:53.019Z"
updatedAt: "2020-04-13T23:02:34.056Z"
---

Agent docs sync from GitHub, so any changes you author here in Readme will be overwritten. Make docs updates in GitHub instead. For more info, see [authoring guidelines in github](https://github.com/optimizely/agent/blob/master/docs/internal%20docs%20authoring%20notes.md).
@@ -0,0 +1,78 @@
---
title: "Optimizely Agent"
excerpt: ""
slug: "optimizely-agent"
hidden: false
metadata:
title: "Optimizely Agent microservice - Optimizely Full Stack"
createdAt: "2020-02-21T20:35:58.387Z"
updatedAt: "2020-07-14T20:51:52.458Z"
---
Optimizely Agent is a standalone, open-source, and highly available microservice that provides major benefits over using Optimizely SDKs in certain use cases. The [Agent REST API](https://library.optimizely.com/docs/api/agent/v1/index.html) offers consolidated and simplified endpoints for accessing all the functionality of Optimizely Full Stack SDKs.

A typical production installation of Optimizely Agent runs two or more service instances behind a load balancer or proxy. The service itself can be run via a Docker container or installed from source. See [Setup Optimizely Agent](doc:setup-optimizely-agent) for instructions on how to run Optimizely Agent.

### Example Implementation
![example implementation](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-example-implementation.png)
# Should I Use Optimizely Agent?

Here are some of the top reasons to consider using Optimizely Agent:

## 1. Service Oriented Architecture (SOA)
If you already separate some of your logic into services that might need to access the Optimizely decision APIs, we recommend using Optimizely Agent.

The images below compare implementation styles in a service-oriented architecture, first *without* using Optimizely Agent, which shows six embedded SDK instances:

!["A diagram showing the use of SDKs installed on each service in a service oriented architecture \n(Click to Enlarge)"](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-service-oriented-architecture.png)

Now *with* Agent, instead of installing the SDK six times, you create just one Optimizely instance: an HTTP API that every service can access as needed.

!["A diagram showing the use of Optimizely Agent in a single service \n(Click to Enlarge)"](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-single-service.png)

## 2. Standardize Access Across Teams
If you want to deploy Optimizely Full Stack once, then roll out the single implementation across a large number of teams, we recommend using Optimizely Agent.

By standardizing your teams' access to the Optimizely service, you can better enforce processes and implement governance around feature management and experimentation as a practice.

!["A diagram showing the central and standardized access to the Optimizely Agent service across an arbitrary number of teams.\n(Click to Enlarge)"](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-standardized-access.png)

## 3. Networking Centralization
You don’t want many SDK instances connecting to Optimizely's cloud service from every node in your application. Optimizely Agent centralizes your network connection. Only one cluster of agent instances connects to Optimizely for tasks like updating [datafiles](doc:get-the-datafile) and dispatching [events](doc:track-events).

## 4. Languages
You’re using a language that isn’t supported by a native SDK (e.g., Elixir, Scala, Perl). While it's possible to create your own service using an Optimizely SDK of your choice, you could also customize the open-source Optimizely Agent to your needs without building the service layer on your own.


# Reasons to *not* use Optimizely Agent
If your use case wouldn't benefit greatly from Optimizely Agent, consider the following reasons *not* to use it and review Optimizely's many [open-source SDKs](doc:sdk-reference-guides) instead.

## 1. Latency
If time to provide bucketing decisions is a primary concern for you, you may want to use an embedded Full Stack SDK rather than Optimizely Agent.


| Implementation Option | Decision Latency |
|-----------------------|------------------|
| Embedded SDK | microseconds |
| Optimizely Agent | milliseconds |

## 2. Monolith
If your app is constructed as a monolith, embedded SDKs might be easier to install and might be a more natural fit for your application and development practices.

## 3. Velocity
If you’re looking for the fastest way to get a single team up and running with deploying feature management and experimentation, embedding an SDK is the best option for you at first. You can always start using Optimizely Agent later, and it can even be used alongside Optimizely Full Stack SDKs running in another part of your stack.

# Best Practices
While every implementation is different, you can review this section of best practices for tips on these commonly discussed topics.


## How many Agent instances should I deploy?
Agent can scale to large decision / event tracking volumes with relatively low CPU / memory specs. For example, at Optimizely, we scaled our deployment to 740 clients with a cluster of 12 agent instances, which in total use 6 vCPUs and 12GB RAM. You will likely need to focus more on network bandwidth than compute power.

## Using a load balancer
Any standard load balancer should let you route traffic across your agent cluster. At Optimizely, we used an AWS Elastic Load Balancer (ELB) for our internal deployment. This allows us to transparently scale our agent cluster as internal demands increase.

## Synchronizing datafiles across Agent instances
Agent offers eventual rather than strong consistency across datafiles.

Today, each Agent instance maintains a dedicated, separate cache and persists an SDK instance for each SDK key your team uses. Agent instances automatically keep datafiles up to date for each SDK key, so you will have eventual consistency across the cluster. The rate of the datafile update can be [set as the configuration](doc:configure-optimizely-agent) value ```OPTIMIZELY_CLIENT_POLLINGINTERVAL``` (the default is 1 minute).

Because SDKs are generally stateless today, they shouldn’t need to share data. We plan to add a common backing data store, so we invite you to share your feedback.

If you require strong consistency across datafiles, we recommend an active / passive deployment where all requests are made to a single vertically scaled host, with a passive, standby cluster available for high availability.
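As a concrete illustration, the polling interval can be supplied as an environment variable when starting Agent. This is only a sketch: the `30s` duration value is an assumption about the accepted format, so check the [configuration options](doc:configure-optimizely-agent) for specifics.

```bash
# Sketch: poll Optimizely for datafile updates every 30 seconds instead of the 1-minute default.
# The duration format (e.g. 30s, 2m) is an assumption; see the configuration docs for specifics.
docker run -p 8080:8080 --env OPTIMIZELY_CLIENT_POLLINGINTERVAL=30s optimizely/agent
```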
@@ -0,0 +1,88 @@
---
title: "Quickstart for Agent"
excerpt: ""
slug: "quickstart-for-agent"
hidden: false
metadata:
title: "Quickstart for Agent - Optimizely Full Stack"
createdAt: "2020-05-21T20:35:58.387Z"
updatedAt: "2020-08-17T20:51:52.458Z"
---

This brief quickstart describes how to run Agent, using two examples:

- To get started using Docker, see [Running locally via Docker](https://docs.developers.optimizely.com/full-stack/docs/quickstart-with-docker#section-running-locally-via-docker).

- To get started using example Node microservices, see the following video link.



## Running locally via Node
| Resource | Description |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| [Implementing feature flags across microservices with Optimizely Agent](https://www.youtube.com/watch?v=kwNVdSXMGX8&t=20s) | 4-minute video on implementing Optimizely Agent with example microservices |

## Running locally via Docker

Follow these steps to deploy Optimizely Agent locally via Docker and access some of the common API endpoints. If Docker is not installed, you can download it [here](https://docs.docker.com/install/).

First pull the Docker image with:

```bash
docker pull optimizely/agent
```

Then start the service in the foreground with the following command:

```bash
docker run -p 8080:8080 --env OPTIMIZELY_LOG_PRETTY=true optimizely/agent
```
Note that we're enabling "pretty" logs, which provide colorized, human-readable formatting.
The default log output format is structured JSON.

## Evaluating REST APIs

The rest of this getting started guide demonstrates the API's capabilities. For brevity, we've chosen to illustrate the API usage with Python. Note that the APIs are also defined via OpenAPI (Swagger) and can be found on localhost [here](http://localhost:8080/openapi.yaml).

### Start an http session

Each request made into Optimizely Agent is in the context of an Optimizely SDK Key. SDK Keys map API requests to a specific Optimizely Project and Environment. We can set up a global request header by using the `requests.Session` object.

```python
import requests
s = requests.Session()
s.headers.update({'X-Optimizely-SDK-Key': '<<YOUR-SDK-KEY>>'})
```

To get your SDK key, navigate to the project settings of your Optimizely account.

Future examples will assume this session is being maintained.

### Get current environment configuration

The `/config` endpoint returns a manifest of the current working environment.

```python
resp = s.get('http://localhost:8080/v1/config')
env = resp.json()

for key in env['featuresMap']:
print(key)
```

### Run a feature flag rule

The `/decide?keys={keys}` endpoint decides whether to enable a feature flag or flags for a given user. We'll provide a `userId` via the request body. The API evaluates the `userId` to determine which flag rule and flag variation the user buckets into. Rule types include A/B tests, in which flag variations are measured against one another, and flag deliveries, which progressively make the flag available to the selected audience.

This endpoint returns an array of `OptimizelyDecision` objects, each of which contains information about the flag and rule the user bucketed into.

```python
params = { "keys": "my-feature-flag" }
payload = { "userId": "test-user" }
resp = s.post(url = 'http://localhost:8080/v1/decide', params=params, json=payload)

print(resp.json())
```

The decide API is a POST to signal to the caller that there are side-effects. Namely, this endpoint results in a "decision" event sent to Optimizely analytics for the purpose of analyzing A/B test results. By default a "decision" is not sent if the feature flag is simply part of a delivery.
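Because the `keys` query parameter can name more than one flag, you can also request several decisions in one call. The sketch below assumes the endpoint accepts repeated `keys` parameters (which is how `requests` encodes a list value) and that each returned `OptimizelyDecision` exposes `flagKey` and `enabled` fields; adjust to the actual response shape if yours differs.

```python
import requests

# Reuse the same session and SDK-key header set up earlier in this guide.
s = requests.Session()
s.headers.update({'X-Optimizely-SDK-Key': '<<YOUR-SDK-KEY>>'})

# Sketch: request decisions for several flags at once.
# requests encodes the list as repeated keys=... query parameters (an assumption about the API).
params = { "keys": ["my-feature-flag", "another-flag"] }
payload = { "userId": "test-user" }
resp = s.post(url='http://localhost:8080/v1/decide', params=params, json=payload)

for decision in resp.json():
    # flagKey / enabled field names are assumptions based on the OptimizelyDecision object.
    print(decision.get("flagKey"), decision.get("enabled"))
```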
@@ -0,0 +1,75 @@
---
title: "Install Optimizely Agent"
excerpt: ""
slug: "setup-optimizely-agent"
hidden: false
metadata:
title: "Install Agent - Optimizely Full Stack"
createdAt: "2020-02-21T17:44:27.363Z"
updatedAt: "2020-03-31T23:54:17.841Z"
---
## Running Agent from source (Linux / OSX)

To develop and compile Optimizely Agent from source:

1. Install [Golang](https://golang.org/dl/) version 1.13+.
2. Clone the [Optimizely Agent repo](https://github.com/optimizely/agent).
3. From the repo directory, open a terminal and start Optimizely Agent:

```bash
make setup
```
Then
```bash
make run
```

This starts the Optimizely Agent with the default configuration in the foreground.

## Running Agent from source (Windows)

You can use a [helper script](https://github.com/optimizely/agent/blob/master/scripts/build.ps1) to install prerequisites (Golang, Git) and compile Agent in a Windows environment. Take these steps:

1. Clone the [Optimizely Agent repo](https://github.com/optimizely/agent)
2. From the repo directory, open a PowerShell terminal and run

```powershell
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

.\scripts\build.ps1

.\bin\optimizely.exe
```

## Running Agent via Docker

If you have Docker installed, you can start Optimizely Agent as a container. Take these steps:

1. Pull the Docker image:

```bash
docker pull optimizely/agent
```
By default this will pull the "latest" tag. You can also specify a specific version of Agent by providing the version as a tag to the docker command:

```bash
docker pull optimizely/agent:X.Y.Z
```

2. Run the docker container with:

```bash
docker run -p 8080:8080 optimizely/agent
```
This will start Agent in the foreground and expose the container API port 8080 to the host.

3. (Optional) You can alter the configuration by passing in environment variables to the preceding command, without having to create a config.yaml file. See [configure optimizely agent](doc:configure-optimizely-agent) for more options.
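For example, configuration values mentioned elsewhere in these docs can be passed straight to `docker run`; the values below are illustrative only, and the polling-interval format is an assumption.

```bash
# Sketch: start Agent with pretty logs and a custom datafile polling interval,
# using environment variables instead of a config.yaml file.
docker run -p 8080:8080 \
  --env OPTIMIZELY_LOG_PRETTY=true \
  --env OPTIMIZELY_CLIENT_POLLINGINTERVAL=30s \
  optimizely/agent
```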

Versioning:
When a new version is released, two images are pushed to Docker Hub. They are distinguished by their tags:
- :latest (same as :X.Y.Z)
- :alpine (same as :X.Y.Z-alpine)

The difference between latest and alpine is that latest is built `FROM scratch` while alpine is `FROM alpine`.
- [latest Dockerfile](https://github.com/optimizely/agent/blob/master/scripts/dockerfiles/Dockerfile.static)
- [alpine Dockerfile](https://github.com/optimizely/agent/blob/master/scripts/dockerfiles/Dockerfile.alpine)