Crowdin translations (translation-batch-1606244448) (github#16615)
* New Crowdin translations by Github Action

* Translation reverts

* Keep pt-BR as main

* Revert files to english

Co-authored-by: Crowdin Bot <support+bot@crowdin.com>
Co-authored-by: Chiedo <chiedo@users.noreply.github.com>
Co-authored-by: Jason Etcovitch <jasonetco@github.com>
4 people authored Nov 25, 2020
1 parent 6c0942d commit 9d9a694
Showing 556 changed files with 7,599 additions and 3,941 deletions.
@@ -33,17 +33,27 @@ All organizations have a single default self-hosted runner group. Organizations

Self-hosted runners are automatically assigned to the default group when created, and can only be members of one group at a time. You can move a runner from the default group to any group you create.
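
Group membership can also be managed outside the UI. The following is a minimal sketch using the runner-groups REST endpoints, assuming they are available in your release; the organization name, group ID, and runner ID are placeholders:

```shell
# Move runner 42 into runner group 2 for the octo-org organization
# (assumed endpoint: PUT /orgs/{org}/actions/runner-groups/{runner_group_id}/runners/{runner_id})
curl -X PUT \
  -H "Authorization: token <em>PERSONAL_ACCESS_TOKEN</em>" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/orgs/octo-org/actions/runner-groups/2/runners/42
```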

When creating a group, you must choose a policy that defines which repositories have access to the runner group. You can configure a runner group to be accessible to a specific list of repositories, all private repositories, or all repositories in the organization.
When creating a group, you must choose a policy that defines which repositories have access to the runner group.

{% data reusables.organizations.navigate-to-org %}
{% data reusables.organizations.org_settings %}
{% data reusables.organizations.settings-sidebar-actions %}
1. In the **Self-hosted runners** section, click **Add new**, and then **New group**.

![Add runner group](/assets/images/help/settings/actions-org-add-runner-group.png)
1. Enter a name for your runner group, and select an access policy from the **Repository access** dropdown list.
1. Enter a name for your runner group, and assign a policy for repository access.

![Add runner group options](/assets/images/help/settings/actions-org-add-runner-group-options.png)
{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %} You can configure a runner group to be accessible to a specific list of repositories, or to all repositories in the organization. By default, public repositories can't access runners in a runner group, but you can use the **Allow public repositories** option to override this.{% elsif currentVersion == "enterprise-server@2.22" %}You can configure a runner group to be accessible to a specific list of repositories, all private repositories, or all repositories in the organization.{% endif %}

{% warning %}

**Warning**
{% indented_data_reference site.data.reusables.github-actions.self-hosted-runner-security spaces=3 %}
For more information, see "[About self-hosted runners](/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)."

{% endwarning %}

![Add runner group options](/assets/images/help/settings/actions-org-add-runner-group-options.png)
1. Click **Save group** to create the group and apply the policy.

### Creating a self-hosted runner group for an enterprise
@@ -52,7 +62,7 @@ Enterprises can add their self-hosted runners to groups for access management. E

Self-hosted runners are automatically assigned to the default group when created, and can only be members of one group at a time. You can assign the runner to a specific group during the registration process, or you can later move the runner from the default group to a custom group.
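
For example, a runner can be placed into a custom group at registration time. This sketch assumes the runner application supports the `--runnergroup` option; the enterprise name, token, and group name are placeholders:

```shell
# Register a new enterprise runner directly into a group named "production"
$ ./config.sh --url https://github.com/enterprises/<em>ENTERPRISE</em> \
    --token <em>REGISTRATION_TOKEN</em> \
    --runnergroup production
```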

When creating a group, you must choose a policy that grants access to all organizations in the enterprise or choose specific organizations.
When creating a group, you must choose a policy that defines which organizations have access to the runner group.

{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.policies-tab %}
@@ -61,7 +71,17 @@ When creating a group, you must choose a policy that grants access to all organizations in the enterprise or choose specific organizations.
1. Click **Add new**, and then **New group**.

![Add runner group](/assets/images/help/settings/actions-enterprise-account-add-runner-group.png)
1. Enter a name for your runner group, and select an access policy from the **Organization access** dropdown list.
1. Enter a name for your runner group, and assign a policy for organization access.

{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %} You can configure a runner group to be accessible to a specific list of organizations, or all organizations in the enterprise. By default, public repositories can't access runners in a runner group, but you can use the **Allow public repositories** option to override this.{% elsif currentVersion == "enterprise-server@2.22" %}You can configure a runner group to be accessible to all organizations in the enterprise or choose specific organizations.{% endif %}

{% warning %}

**Warning**
{% indented_data_reference site.data.reusables.github-actions.self-hosted-runner-security spaces=3 %}
For more information, see "[About self-hosted runners](/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)."

{% endwarning %}

![Add runner group options](/assets/images/help/settings/actions-enterprise-account-add-runner-group-options.png)
1. Click **Save group** to create the group and apply the policy.
Expand Down
@@ -572,6 +572,8 @@ on:

{% data reusables.developer-site.pull_request_forked_repos_link %}

{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %}

#### `pull_request_target`

This event is similar to `pull_request`, except that it runs in the context of the base repository of the pull request, rather than in the merge commit. This means that you can more safely make your secrets available to the workflows triggered by the pull request, because only workflows defined in the commit on the base repository are run. For example, this event allows you to create workflows that label and comment on pull requests, based on the contents of the event payload.
@@ -589,6 +591,8 @@ on: pull_request_target
types: [assigned, opened, synchronize, reopened]
```
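
As a sketch of that pattern, the workflow below applies labels to pull requests, including those opened from forks, because it runs with the base repository's workflow definition and token. The `actions/labeler` action and its `.github/labeler.yml` configuration file are assumptions for illustration:

{% raw %}
```yaml
name: Label pull requests
on: pull_request_target

jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # Runs in the base repository's context, so the token is available
      # even for pull requests from forks.
      - uses: actions/labeler@v2
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```
{% endraw %}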

{% endif %}

#### `push`

{% note %}
@@ -689,6 +693,8 @@ on:
types: [started]
```

{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %}

#### `workflow_run`

{% data reusables.webhooks.workflow_run_desc %}
Expand All @@ -711,6 +717,8 @@ on:
- requested
```

{% endif %}

### Triggering new workflows using a personal access token

{% data reusables.github-actions.actions-do-not-trigger-workflows %} For more information, see "[Authenticating with the GITHUB_TOKEN](/actions/configuring-and-managing-workflows/authenticating-with-the-github_token)."
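
For example, a workflow step can authenticate with a personal access token stored as a secret to create a `repository_dispatch` event, which will trigger listening workflows because it is not authenticated with `GITHUB_TOKEN`. A minimal sketch; the secret name `PERSONAL_ACCESS_TOKEN`, the repository, and the event type are placeholders:

{% raw %}
```yaml
steps:
  - name: Trigger downstream workflows
    env:
      # PERSONAL_ACCESS_TOKEN is an assumed secret name.
      PAT: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
    run: |
      curl -X POST \
        -H "Authorization: token $PAT" \
        -H "Accept: application/vnd.github.v3+json" \
        https://api.github.com/repos/octo-org/octo-repo/dispatches \
        -d '{"event_type": "build-complete"}'
```
{% endraw %}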
@@ -876,9 +876,40 @@ strategy:

{% endnote %}

##### Using environment variables in a matrix

You can add custom environment variables for each test combination by using `include` with `env`. You can then refer to the custom environment variables in a later step.

In this example, the matrix entries for `node-version` are each configured to use different values for the `site` and `datacenter` environment variables. The `Echo site details` step then uses {% raw %}`${{ matrix.site }}`{% endraw %} and {% raw %}`${{ matrix.datacenter }}`{% endraw %} to refer to the custom variables:

{% raw %}
```yaml
name: Node.js CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - node-version: 10.x
            site: "prod"
            datacenter: "site-a"
          - node-version: 12.x
            site: "dev"
            datacenter: "site-b"
    steps:
      - name: Echo site details
        env:
          SITE: ${{ matrix.site }}
          DATACENTER: ${{ matrix.datacenter }}
        run: echo $SITE $DATACENTER
```
{% endraw %}

### **`jobs.<job_id>.strategy.fail-fast`**

When set to `true`, {% data variables.product.prodname_dotcom %} cancels all in-progress jobs if any `matrix` job fails. Default: `true`
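
For example, to let the remaining matrix combinations keep running after one fails, a job can disable the default behavior:

```yaml
strategy:
  # Do not cancel in-progress jobs when one matrix job fails.
  fail-fast: false
  matrix:
    node-version: [10.x, 12.x, 14.x]
```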

### **`jobs.<job_id>.strategy.max-parallel`**
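
This setting caps how many jobs from the matrix may run simultaneously; a minimal sketch, with an illustrative limit of two concurrent jobs:

```yaml
strategy:
  # Run at most two matrix jobs at the same time.
  max-parallel: 2
  matrix:
    node-version: [10.x, 12.x, 14.x]
```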

@@ -84,7 +84,7 @@ This allows you to determine the UUID of your node in `cluster.conf`.
Allows you to exempt a list of users from API rate limits. For more information, see "[Rate Limiting](/enterprise/{{ page.version }}/v3/#rate-limiting)."

``` shell
$ ghe-config app.github.rate_limiting_exempt_users "<em>hubot</em> <em>github-actions</em>"
$ ghe-config app.github.rate-limiting-exempt-users "<em>hubot</em> <em>github-actions</em>"
# Exempts the users hubot and github-actions from rate limits
```
{% endif %}
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
mysql-master = <em>HOSTNAME</em>
redis-master = <em>HOSTNAME</em>
<strong>primary-datacenter = default</strong>
```

- Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
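
  For example, renaming the default datacenter to `primary` (the same name the note in step 4 below assumes) would look like this in the `[cluster]` section:

  ```
  primary-datacenter = primary
  ```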

4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
```
datacenter = default
```
When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}

```shell
[cluster "<em>HOSTNAME</em>"]
  <strong>datacenter = default</strong>
  hostname = <em>HOSTNAME</em>
  ipv4 = <em>IP ADDRESS</em>
  ...
```

{% note %}

**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.

```
consul-datacenter = primary
```

{% endnote %}

{% data reusables.enterprise_clustering.apply-configuration %}

@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuration)."

1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."

{% note %}

**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.

{% endnote %}
{% data reusables.enterprise_clustering.ssh-to-a-node %}
3. Back up your existing cluster configuration.
```
cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
```
4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
```
grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
```
5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
```
git config -f ~/cluster-passive.conf --remove-section cluster
```
6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
```shell
sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
```
7. Decide on a pattern for the passive nodes' hostnames.

@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuration)."
8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.

```shell
sudo vim ~/cluster-passive.conf
```

9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuration)."
- Add a new key-value pair, `replica = enabled`.
```shell
[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
  ...
  hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
  ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
  <strong>replica = enabled</strong>
  ...
```
10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
```shell
cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
```
11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.
```shell
git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
```
12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.
```shell
git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
```
{% warning %}

**Warning**: Review your cluster configuration file before proceeding.
- In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
- In each section for an active node named `[cluster "<em>ACTIVE NODE HOSTNAME</em>"]`, double-check the following key-value pairs.
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuration)."
- `replica` should be configured as `enabled`.
- Take the opportunity to remove sections for offline nodes that are no longer in use.

To review an example configuration, see "[Example configuration](#example-configuration)."

{% endwarning %}

13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}

@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuration)."
14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.

```shell
Finished cluster initialization
```

{% data reusables.enterprise_clustering.apply-configuration %}
@@ -293,20 +303,28 @@ Initial replication between the active and passive nodes in your cluster takes time.
You can monitor the progress on any node in the cluster, using command-line tools available via the {% data variables.product.prodname_ghe_server %} administrative shell. For more information about the administrative shell, see "[Accessing the administrative shell (SSH)](/enterprise/admin/configuration/accessing-the-administrative-shell-ssh)."
- Monitor replication of databases:

  ```
  /usr/local/share/enterprise/ghe-cluster-status-mysql
  ```

- Monitor replication of repository and Gist data:

  ```
  ghe-spokes status
  ```

- Monitor replication of attachment and LFS data:

  ```
  ghe-storage replication-status
  ```

- Monitor replication of Pages data:

  ```
  ghe-dpages replication-status
  ```
You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
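
For example, from the administrative shell of any node:

```
ghe-cluster-status
```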
@@ -20,6 +20,8 @@ As more users join {% data variables.product.product_location %}

{% endnote %}

#### Minimum requirements

{% data reusables.enterprise_installation.hardware-rec-table %}

### Increasing the data partition size