Docs readme sync 3 #251

Merged: 9 commits merged on Jul 16, 2020
.travis.yml (50 additions, 0 deletions)
@@ -21,6 +21,7 @@ stages:
- 'Build, Upload and Publish (draft)'
- 'Test github release assets'
- 'Publish (real)'
- 'Readme-sync'

jobs:

@@ -217,6 +218,55 @@ jobs:
# how to use hub: https://hub.github.com/hub.1.html
- hub release edit --draft=false -m "" ${TRAVIS_TAG}

- stage: 'Readme-sync'
before_script: skip
cache: false
# translation: if we're merging into a master branch...
if: type = push AND branch = master

language: node_js
install:


# make dir $HOME/readme-sync2 & clone readme-sync2 repo to it; install dependencies

- mkdir $HOME/readme-sync2 && pushd $HOME/readme-sync2 && git init && git pull https://$CI_USER_TOKEN@github.com/optimizely/readme-sync2.git && popd
- source ~/.nvm/nvm.sh && cd $HOME/readme-sync2 && nvm install && npm install

script:
# we need to be in $TRAVIS_BUILD_DIR in order to run the following git diff properly
- cd $TRAVIS_BUILD_DIR

# print which docs changed in this pull request (and which we therefore expect the readme-sync2 tool to update):
- CHANGED_DOCS_FILES=($(git diff --name-only $TRAVIS_COMMIT_RANGE -- docs/readme-sync))
- echo $CHANGED_DOCS_FILES

# only if changes were made in the docs/readme-sync directory, run this readme-sync stage to sync the docs
- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY --version 3.1 --docs $TRAVIS_BUILD_DIR/docs/readme-sync/)

#########################################################################################
# directories/scripts for full SDK-reference-guides, to be implemented after agent docs sync
#########################################################################################
## this preps the input directory for readme-sync script
#- mkdir -p $HOME/readme-sync2/docs/readme-sync/sdk-reference-guides
## ${TRAVIS_REPO_SLUG#optimizely/} translates to go-sdk docs/readme-sync/sdk-reference-guides/go-sdk
#- ln -s $TRAVIS_BUILD_DIR/docs/readme-sync/sdk-reference-guides/${TRAVIS_REPO_SLUG#optimizely/} $HOME/readme-sync2/docs/readme-sync/sdk-reference-guides/${TRAVIS_REPO_SLUG#optimizely/}

## now we need to get all the other *-sdk repos too
##
## first we list all possible sdks and inside the for loop, remove the one we are updating
#- export ALL_SDK_REPOS="android-sdk csharp-sdk go-sdk java-sdk javascript-sdk objective-c-sdk python-sdk react-sdk ruby-sdk swift-sdk"
#- mkdir $HOME/sdks && pushd $HOME/sdks && for i in ${ALL_SDK_REPOS//${TRAVIS_REPO_SLUG#optimizely/}}; do git clone https://github.com/optimizely/$i; ( [ -d "$HOME/sdks/$i/docs/readme-sync/sdk-reference-guides/$i" ] && ln -s $HOME/sdks/$i/docs/readme-sync/sdk-reference-guides/$i $HOME/readme-sync2/docs/readme-sync/sdk-reference-guides/$i ) || true; done && popd
## check our work
#- ls -al $HOME/sdks
#- ls -al $HOME/readme-sync2/docs/readme-sync/sdk-reference-guides
#script:
## we need to be in $TRAVIS_BUILD_DIR in order to run the following git diff properly
#- cd $TRAVIS_BUILD_DIR
#- git diff --quiet $TRAVIS_COMMIT_RANGE -- docs/readme-sync || ( cd $HOME/readme-sync2 && npx ts-node sync/index.ts --apiKey $README_SYNC_API_KEY --version 4.0 --docs docs/readme-sync/ )



before_script:
# https://github.com/travis-ci/gimme
- eval "$(gimme)"
Binary file added docs/images/agent-example-implementation.png
Binary file added docs/images/agent-single-service.png
Binary file added docs/images/agent-standardized-access.png
@@ -0,0 +1,78 @@
---
title: "Optimizely Agent"
excerpt: ""
slug: "optimizely-agent"
hidden: false
metadata:
title: "Optimizely Agent microservice - Optimizely Full Stack"
createdAt: "2020-02-21T20:35:58.387Z"
updatedAt: "2020-07-14T20:51:52.458Z"
---
Optimizely Agent is a standalone, open-source, and highly available microservice that provides major benefits over using Optimizely SDKs in certain use cases. The Agent [REST API](https://library.optimizely.com/docs/api/agent/v1/index.html) offers consolidated and simplified endpoints for accessing all the functionality of Optimizely Full Stack SDKs.

A typical production installation of Optimizely Agent runs two or more instances of the service behind a load balancer or proxy. The service itself can be run via a Docker container or installed from source. See [Setup Optimizely Agent](doc:setup-optimizely-agent) for instructions on how to run Optimizely Agent.

### Example Implementation
![example implementation](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-example-implementation.png)
# Should I Use Optimizely Agent?

Here are some of the top reasons to consider using Optimizely Agent:

## 1. Service Oriented Architecture (SOA)
If you already separate some of your logic into services that might need to access the Optimizely decision APIs, we recommend using Optimizely Agent.

The images below compare implementation styles in a service-oriented architecture. The first, *without* Optimizely Agent, shows six embedded SDK instances:

!["A diagram showing the use of SDKs installed on each service in a service oriented architecture \n(Click to Enlarge)"](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-service-oriented-architecture.png)

Now *with* Agent, instead of installing the SDK six times, you create just one Optimizely instance: an HTTP API that every service can access as needed.

!["A diagram showing the use of Optimizely Agent in a single service \n(Click to Enlarge)"](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-single-service.png)

## 2. Standardize Access Across Teams
If you want to deploy Optimizely Full Stack once, then roll out the single implementation across a large number of teams, we recommend using Optimizely Agent.

By standardizing your teams' access to the Optimizely service, you can better enforce processes and implement governance around feature management and experimentation as a practice.

!["A diagram showing the central and standardized access to the Optimizely Agent service across an arbitrary number of teams.\n(Click to Enlarge)"](https://raw.githubusercontent.com/optimizely/agent/master/docs/images/agent-standardized-access.png)

## 3. Networking Centralization
You don’t want many SDK instances connecting to Optimizely's cloud service from every node in your application. Optimizely Agent centralizes your network connection. Only one cluster of Agent instances connects to Optimizely for tasks like updating [datafiles](doc:get-the-datafile) and dispatching [events](doc:track-events).

## 4. Languages
You’re using a language that isn’t supported by a native SDK (e.g., Elixir, Scala, Perl). While it’s possible to create your own service using an Optimizely SDK of your choice, you could also customize the open-source Optimizely Agent to your needs without building the service layer on your own.
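
Because Agent exposes plain HTTP endpoints, any language or tool that can make an HTTP request can use it. Below is a minimal sketch using `curl`, assuming Agent is running locally on its default port 8080 and that you substitute your own SDK key and feature key (the endpoint, header, and request body are described in [Evaluate REST APIs](doc:evaluate-rest-apis)):

```bash
# Activate a feature for a user from any language or tool that speaks HTTP.
# Replace <your-sdk-key> and my-feature with your own values.
curl -X POST \
  -H "X-Optimizely-SDK-Key: <your-sdk-key>" \
  -H "Content-Type: application/json" \
  -d '{"userId": "test-user"}' \
  "http://localhost:8080/v1/activate?featureKey=my-feature"
```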


# Reasons to *not* use Optimizely Agent
If your use case wouldn't benefit greatly from Optimizely Agent, consider the following reasons *not* to use it and review Optimizely's many [open-source SDKs](doc:sdk-reference-guides) instead.

## 1. Latency
If the time it takes to return a bucketing decision is a primary concern for you, you may want to use an embedded Full Stack SDK rather than Optimizely Agent.


| Implementation Option | Decision Latency |
|-----------------------|------------------|
| Embedded SDK | microseconds |
| Optimizely Agent | milliseconds |

## 2. Monolith
If your app is constructed as a monolith, embedded SDKs might be easier to install and might be a more natural fit for your application and development practices.

## 3. Velocity
If you’re looking for the fastest way to get a single team up and running with deploying feature management and experimentation, embedding an SDK is the best option for you at first. You can always start using Optimizely Agent later, and it can even be used alongside Optimizely Full Stack SDKs running in another part of your stack.

# Best Practices
While every implementation is different, you can review this section of best practices for tips on these commonly discussed topics.


## How many Agent instances should I deploy?
Agent can scale to large decision / event tracking volumes with relatively low CPU / memory specs. For example, at Optimizely, we scaled our deployment to 740 clients with a cluster of 12 agent instances, which in total use 6 vCPUs and 12GB RAM. You will likely need to focus more on network bandwidth than compute power.

## Using a load balancer
Any standard load balancer should let you route traffic across your Agent cluster. At Optimizely, we use an AWS Elastic Load Balancer (ELB) for our internal deployment. This allows us to transparently scale our Agent cluster as internal demand increases.

## Synchronizing datafiles across Agent instances
Agent offers eventual rather than strong consistency across datafiles.
Today, each Agent instance maintains its own dedicated cache and persists an SDK instance for each SDK key your team uses. Agent instances automatically keep datafiles up to date for each SDK key so that you have eventual consistency across the cluster. The datafile update interval can be [set via the configuration](doc:configure-optimizely-agent) value ```OPTIMIZELY_CLIENT_POLLINGINTERVAL``` (the default is 1 minute).
Because SDKs are generally stateless today, they shouldn’t need to share data. We plan to add a common backing data store, so we invite you to share your feedback.
If you require strong consistency across datafiles, then we recommend an active / passive deployment where all requests are made to a single vertically scaled host, with a passive, standby cluster available for high availability.
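
If you need a faster or slower refresh, the polling interval can be supplied as an environment variable when starting Agent. A minimal sketch for a from-source run, assuming the value accepts Go-style durations such as `30s` (see [configure optimizely agent](doc:configure-optimizely-agent) for the authoritative configuration reference):

```bash
# Hypothetical override: poll for datafile updates every 30 seconds
# instead of the default 1 minute, then start Agent from source.
export OPTIMIZELY_CLIENT_POLLINGINTERVAL=30s
make run
```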
@@ -0,0 +1,75 @@
---
title: "Set up Optimizely Agent"
excerpt: ""
slug: "setup-optimizely-agent"
hidden: false
metadata:
title: "Getting started with Agent - Optimizely Full Stack"
createdAt: "2020-02-21T17:44:27.363Z"
updatedAt: "2020-03-31T23:54:17.841Z"
---
## Running Agent from source (Linux / OSX)

To develop and compile Optimizely Agent from source:

1. Install [Golang](https://golang.org/dl/) version 1.13+.
2. Clone the [Optimizely Agent repo](https://github.com/optimizely/agent).
3. From the repo directory, open a terminal and start Optimizely Agent:

```bash
make setup
```
Then
```bash
make run
```

This starts the Optimizely Agent with the default configuration in the foreground.
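
To quickly verify that the service is up, you can request the current environment configuration over the API port. This is just a sanity check, assuming the default API port 8080 and your own SDK key:

```bash
# Sanity check against a locally running Agent (default API port 8080).
curl -H "X-Optimizely-SDK-Key: <your-sdk-key>" http://localhost:8080/v1/config
```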

## Running Agent from source (Windows)

You can use a [helper script](https://github.com/optimizely/agent/blob/master/scripts/build.ps1) to install prerequisites (Golang, Git) and compile Agent in a Windows environment. Take these steps:

1. Clone the [Optimizely Agent repo](https://github.com/optimizely/agent)
2. From the repo directory, open a PowerShell terminal and run:

```powershell
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

.\scripts\build.ps1

.\bin\optimizely.exe
```

## Running Agent via Docker

If you have Docker installed, you can start Optimizely Agent as a container. Take these steps:

1. Pull the Docker image:

```bash
docker pull optimizely/agent
```
By default this will pull the `latest` tag. You can also pin a particular version of Agent by providing the version as a tag to the docker command:

```bash
docker pull optimizely/agent:X.Y.Z
```

2. Run the Docker container with:

```bash
docker run -p 8080:8080 optimizely/agent
```
This will start Agent in the foreground and expose the container API port 8080 to the host.

3. (Optional) You can alter the configuration by passing in environment variables to the preceding command, without having to create a config.yaml file. See [configure optimizely agent](doc:configure-optimizely-agent) for more options.
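
For example, to override the datafile polling interval described in [Optimizely Agent](doc:optimizely-agent) without creating a config file, you could pass the corresponding environment variable to `docker run` (a sketch; the `30s` value is illustrative):

```bash
# Hypothetical override: poll for datafile updates every 30 seconds
# instead of the default 1 minute.
docker run -p 8080:8080 \
  -e OPTIMIZELY_CLIENT_POLLINGINTERVAL=30s \
  optimizely/agent
```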

Versioning:
When a new version is released, two images are pushed to Docker Hub. They are distinguished by their tags:
- :latest (same as :X.Y.Z)
- :alpine (same as :X.Y.Z-alpine)

The difference between `latest` and `alpine` is that `latest` is built `FROM scratch` while `alpine` is built `FROM alpine`.
- [latest Dockerfile](https://github.com/optimizely/agent/blob/master/scripts/dockerfiles/Dockerfile.static)
- [alpine Dockerfile](https://github.com/optimizely/agent/blob/master/scripts/dockerfiles/Dockerfile.alpine)
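
For example, to pull the alpine-based image instead of the default:

```bash
docker pull optimizely/agent:alpine
```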
@@ -0,0 +1,59 @@
---
title: "Evaluate REST APIs"
excerpt: ""
slug: "evaluate-rest-apis"
hidden: false
metadata:
title: "Evaluate REST APIs - Optimizely Full Stack"
createdAt: "2020-02-21T17:44:53.019Z"
updatedAt: "2020-04-13T23:02:34.056Z"
---
Below is an example demonstrating the API's capabilities. For brevity, we've chosen to illustrate the API usage with Python. Note that the API documentation is defined via an OpenAPI (Swagger) spec and can be viewed [here](https://library.optimizely.com/docs/api/agent/v1/index.htm).

## Start an http session
Each request made to Optimizely Agent is in the context of an Optimizely SDK Key. SDK Keys map API requests to a specific Optimizely Project and Environment. We can set up a global request header by using the `requests.Session` object.


```python
import requests

s = requests.Session()
s.headers.update({'X-Optimizely-SDK-Key': 'YOUR-SDK-KEY'})
```
The following examples will assume this session is being maintained.

## Get current environment configuration
The `/v1/config` endpoint returns a manifest of the current working environment.

```python
resp = s.get('http://localhost:8080/v1/config')
env = resp.json()

for key in env['featuresMap']:
    print(key)
```

## Activate Feature
The `/v1/activate?featureKey={key}` endpoint activates the feature for a given user. In Optimizely, activation happens in the context of a given user so that the bucketing decision can be made. In this case we'll provide a `userId` via the request body; the `userId` determines how the feature will be evaluated. Features can be part of either a Feature Test, in which variations of feature variables are measured against one another, or a feature rollout, which progressively makes the feature available to the selected audience.

From an API standpoint, the presence of a Feature Test or rollout is abstracted away from the response; only the resulting variation or enabled feature is returned.


```python
import json

# single feature activate
params = { "featureKey": "my-feature" }
payload = { "userId": "test-user" }
resp = s.post(url = 'http://localhost:8080/v1/activate', params=params, json=payload)

print(resp.json())


# multiple (bulk) feature activate
params = {
    "featureKey": [key for key in env['featuresMap']],
    "experimentKey": [key for key in env['experimentsMap']]
}
resp2 = s.post(url = 'http://localhost:8080/v1/activate', params=params, json=payload)
print(json.dumps(resp2.json(), indent=4, sort_keys=True))
```
The activate API is a POST to signal to the caller that there are side-effects. Namely, activation results in a "decision" event sent to Optimizely analytics for the purpose of analyzing Feature Test results. A "decision" will NOT be sent if the feature is simply part of a rollout.
@@ -0,0 +1,89 @@
---
title: "Admin API"
excerpt: ""
slug: "admin-api"
hidden: false
metadata:
title: "Admin APIs - Optimizely Full Stack"
createdAt: "2020-02-21T17:44:28.054Z"
updatedAt: "2020-02-21T23:09:19.274Z"
---
The Admin API provides system information about the running process. It can be used to check the availability of the service and to inspect runtime information and operational metrics. By default, the admin listener is configured on port 8088.

## Info

The `/info` endpoint provides basic information about the Optimizely Agent instance.

Example Request:
```bash
curl localhost:8088/info
```

Example Response:
```json
{
  "version": "v0.10.0",
  "author": "Optimizely Inc.",
  "app_name": "optimizely"
}
```

## Health Check

The `/health` endpoint is used to determine service availability.

Example Request:
```bash
curl localhost:8088/health
```

Example Response:
```json
{
  "status": "ok"
}
```

Agent will return an HTTP 200 - OK response if and only if all configured listeners are open and all external dependent services can be reached.
A non-healthy service will return an HTTP 503 - Unavailable response with a descriptive message to help diagnose the issue.

This endpoint can be used when placing Agent behind a load balancer to indicate whether a particular instance can receive inbound requests.
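
For a load balancer health probe you often only need the HTTP status code. A minimal sketch with `curl`:

```bash
# Print only the HTTP status code from the health endpoint
# (200 when healthy, 503 when not).
curl -s -o /dev/null -w "%{http_code}\n" localhost:8088/health
```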

## Metrics

The `/metrics` endpoint exposes telemetry data for the running Optimizely Agent. The core runtime metrics are exposed via the Go `expvar` package. Documentation for the various statistics can be found in the [mstats](https://golang.org/src/runtime/mstats.go) package.

Example Request:
```bash
curl localhost:8088/metrics
```

Example Response:
```json
{
  "cmdline": [
    "bin/optimizely"
  ],
  "memstats": {
    "Alloc": 924136,
    "TotalAlloc": 924136,
    "Sys": 71893240,
    "Lookups": 0,
    "Mallocs": 4726,
    "HeapAlloc": 924136,
    ...
    "Frees": 172
  },
  ...
}
```
Custom metrics are also provided for the individual service endpoints and follow the pattern of:

```bash
"timers.<metric-name>.counts": 0,
"timers.<metric-name>.responseTime": 0,
"timers.<metric-name>.responseTimeHist.p50": 0,
"timers.<metric-name>.responseTimeHist.p90": 0,
"timers.<metric-name>.responseTimeHist.p95": 0,
"timers.<metric-name>.responseTimeHist.p99": 0,
```
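
Because the metrics payload is plain JSON, you can filter it with standard tools. A sketch using `jq`, where `activate` is a hypothetical metric name standing in for whichever endpoints your deployment exposes:

```bash
# Extract the request count for a hypothetical "activate" timer.
curl -s localhost:8088/metrics | jq '.["timers.activate.counts"]'
```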