feat: add typos check in ci workflow and fix typo errors.
Signed-off-by: rambohe-ch <linbo.hlb@alibaba-inc.com>
rambohe-ch committed Jul 15, 2024
1 parent 7ec6485 commit 8c3282c
Showing 90 changed files with 337 additions and 317 deletions.
35 changes: 23 additions & 12 deletions .github/workflows/ci.yaml
@@ -19,6 +19,17 @@ env:
AWS_USR: ${{ secrets.AWS_USR }}

jobs:
+  typos-check:
+    name: Spell Check with Typos
+    runs-on: ubuntu-22.04
+    steps:
+      - name: Checkout Actions Repository
+        uses: actions/checkout@v4
+      - name: Check spelling with custom config file
+        uses: crate-ci/typos@v1.23.2
+        with:
+          config: ./typos.toml

verify:
runs-on: ubuntu-22.04
steps:
@@ -49,18 +60,18 @@ jobs:
skip-cache: true
mode: readonly

-# markdownlint-misspell-shellcheck:
-# runs-on: ubuntu-22.04
-# # this image is build from Dockerfile
-# # https://github.com/pouchcontainer/pouchlinter/blob/master/Dockerfile
-# container: pouchcontainer/pouchlinter:v0.1.2
-# steps:
-# - name: Checkout
-# uses: actions/checkout@v3
-# - name: Run misspell
-# run: find ./* -name "*" | xargs misspell -error
-# - name: Lint markdown files
-# run: find ./ -name "*.md" | grep -v enhancements | grep -v .github
+# markdownlint-misspell-shellcheck:
+# runs-on: ubuntu-22.04
+# # this image is build from Dockerfile
+# # https://github.com/pouchcontainer/pouchlinter/blob/master/Dockerfile
+# container: pouchcontainer/pouchlinter:v0.1.2
+# steps:
+# - name: Checkout
+# uses: actions/checkout@v3
+# - name: Run misspell
+# run: find ./* -name "*" | xargs misspell -error
+# - name: Lint markdown files
+# run: find ./ -name "*.md" | grep -v enhancements | grep -v .github
# - name: Check markdown links
# run: |
# set +e
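The new job points crate-ci/typos at a `typos.toml` file in the repository root. That config is presumably added elsewhere in this commit, but its contents are not among the hunks shown here; the snippet below is only a minimal sketch of what a crate-ci/typos config can look like, with hypothetical entries rather than the actual file.

```toml
# Hypothetical sketch of a typos.toml config; not the actual file from this commit.
[files]
# Globs the spell checker should skip, e.g. vendored or generated content.
extend-exclude = ["vendor/", "go.sum"]

[default]
# Regex patterns that are never flagged, e.g. hex strings such as commit hashes.
extend-ignore-re = ['\b[0-9a-f]{7,40}\b']

[default.extend-words]
# Map an accepted project-specific spelling to itself so it is not reported.
iff = "iff"
```

With a config like this in place, the same check the workflow runs in CI can also be run locally with the typos CLI before pushing.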
2 changes: 1 addition & 1 deletion Makefile
@@ -101,7 +101,7 @@ verify_manifests:
verify-license:
hack/make-rules/check_license.sh

-# verify-mod will check if go.mod has beed tidied.
+# verify-mod will check if go.mod has been tidied.
verify-mod:
hack/make-rules/verify_mod.sh

2 changes: 1 addition & 1 deletion cmd/yurt-iot-dock/app/options/options.go
@@ -65,7 +65,7 @@ func (o *YurtIoTDockOptions) AddFlags(fs *pflag.FlagSet) {
fs.BoolVar(&o.EnableLeaderElection, "leader-elect", false, "Enable leader election for controller manager. "+"Enabling this will ensure there is only one active controller manager.")
fs.StringVar(&o.Nodepool, "nodepool", "", "The nodePool deviceController is deployed in.(just for debugging)")
fs.StringVar(&o.Namespace, "namespace", "default", "The cluster namespace for edge resources synchronization.")
-fs.StringVar(&o.Version, "version", "", "The version of edge resources deploymenet.")
+fs.StringVar(&o.Version, "version", "", "The version of edge resources deployment.")
fs.StringVar(&o.CoreDataAddr, "core-data-address", "edgex-core-data:59880", "The address of edge core-data service.")
fs.StringVar(&o.CoreMetadataAddr, "core-metadata-address", "edgex-core-metadata:59881", "The address of edge core-metadata service.")
fs.StringVar(&o.CoreCommandAddr, "core-command-address", "edgex-core-command:59882", "The address of edge core-command service.")
2 changes: 1 addition & 1 deletion cmd/yurt-tunnel-agent/app/options/options.go
@@ -97,7 +97,7 @@ func (o *AgentOptions) AddFlags(fs *pflag.FlagSet) {
fs.StringVar(&o.KubeConfig, "kube-config", o.KubeConfig, "Path to the kubeconfig file.")
fs.StringVar(&o.AgentIdentifiers, "agent-identifiers", o.AgentIdentifiers, "The identifiers of the agent, which will be used by the server when choosing agent.")
fs.StringVar(&o.MetaHost, "meta-host", o.MetaHost, "The ip address on which listen for --meta-port port.")
-fs.StringVar(&o.MetaPort, "meta-port", o.MetaPort, "The port on which to serve HTTP requests like profling, metrics")
+fs.StringVar(&o.MetaPort, "meta-port", o.MetaPort, "The port on which to serve HTTP requests like profiling, metrics")
fs.StringVar(&o.CertDir, "cert-dir", o.CertDir, "The directory of certificate stored at.")
}

2 changes: 1 addition & 1 deletion cmd/yurt-tunnel-server/app/options/options.go
@@ -108,7 +108,7 @@ func (o *ServerOptions) AddFlags(fs *pflag.FlagSet) {
fs.StringVar(&o.TunnelAgentConnectPort, "tunnel-agent-connect-port", o.TunnelAgentConnectPort, "The port on which to serve tcp packets from tunnel agent")
fs.StringVar(&o.SecurePort, "secure-port", o.SecurePort, "The port on which to serve HTTPS requests from cloud clients like prometheus")
fs.StringVar(&o.InsecurePort, "insecure-port", o.InsecurePort, "The port on which to serve HTTP requests from cloud clients like metrics-server")
-fs.StringVar(&o.MetaPort, "meta-port", o.MetaPort, "The port on which to serve HTTP requests like profling, metrics")
+fs.StringVar(&o.MetaPort, "meta-port", o.MetaPort, "The port on which to serve HTTP requests like profiling, metrics")
}

func (o *ServerOptions) Config() (*config.Config, error) {
2 changes: 1 addition & 1 deletion cmd/yurthub/app/options/options.go
@@ -107,7 +107,7 @@ func NewYurtHubOptions() *YurtHubOptions {
HeartbeatTimeoutSeconds: 2,
HeartbeatIntervalSeconds: 10,
MaxRequestInFlight: 250,
-BootstrapMode: certificate.TokenBoostrapMode,
+BootstrapMode: certificate.TokenBootstrapMode,
RootDir: filepath.Join("/var/lib/", projectinfo.GetHubName()),
EnableProfiling: true,
EnableDummyIf: true,
@@ -274,7 +274,7 @@ On the basis of the existing NegotiatedSerializer increase a new unstructuredNeg
type unstructuredNegotiatedSerializer struct {
scheme *runtime.Scheme
typer runtime.ObjectTyper
-creator runtime.ObjectCreater
+creator runtime.ObjectCreator
}

// SerializerManager is responsible for managing *rest.Serializers
@@ -402,7 +402,7 @@ func (hl *HandlerLayer) GetSelector(gvk schema.GroupVersionKind) *storage.Select
The followings are the function definitions for object handling using selectors and handlers.

```go
-//Process uses the registered handlers to process the objects. The obj passed into function shoud not be changed.
+//Process uses the registered handlers to process the objects. The obj passed into function should not be changed.
func (hl *HandlerLayer) Process(obj runtime.Object) (runtime.Object, error) {
gvk := obj.GetObjectKind().GroupVersionKind()
handlers := hl.GetHandlers(gvk)
4 changes: 2 additions & 2 deletions docs/proposals/20210310-edge-device-management.md
@@ -299,7 +299,7 @@ type DeviceServiceCondition string

const (
Unavailable DeviceServiceCondition = "Unavailable"
-Available DeviceServiceCondition = "Availale"
+Available DeviceServiceCondition = "Available"
)

type Addressable struct {
@@ -403,7 +403,7 @@ The `DeviceProfile` contains one `deviceResources`, i.e., `lightcolor`, which su
"commands": [
{
"created": <created-timestamp>,
"modified": <modifed-timestamp>,
"modified": <modified-timestamp>,
"id": "<command-id>",
"name": "lightcolor",
"get": {
@@ -35,7 +35,7 @@ status: provisional
Refer to the [OpenYurt Glossary](https://github.com/openyurtio/openyurt/blob/master/docs/proposals/00_openyurt-glossary.md).

## Summary
-This proposal add three subcommands `init`, `join` and `reset` for yurtctl. The subcommand `init` can create an all-in-one kubernetes cluster, simultaneously convert the kuberntes cluster to an OpenYurt cluster. The subcommand `join` is used to add a new node to an OpenYurt cluster, including cloud nodes and edge nodes. The subcommand `reset` can restore the node to the state before joining OpenYurt cluster.
+This proposal add three subcommands `init`, `join` and `reset` for yurtctl. The subcommand `init` can create an all-in-one kubernetes cluster, simultaneously convert the kubernetes cluster to an OpenYurt cluster. The subcommand `join` is used to add a new node to an OpenYurt cluster, including cloud nodes and edge nodes. The subcommand `reset` can restore the node to the state before joining OpenYurt cluster.

## Motivation

2 changes: 1 addition & 1 deletion docs/proposals/20210722-yurtcluster-operator.md
@@ -299,7 +299,7 @@ type YurtCluster struct {

The CRD would be enforced to have a cluster singleton CR semantics, through patched name validation for CRD definition. (for kubebuilder, under config/crd/patches)

-The controller would listen incomming CR, and analyze the requirements to figure out user's intention, that is, what nodes to convert, and what nodes to revert.
+The controller would listen incoming CR, and analyze the requirements to figure out user's intention, that is, what nodes to convert, and what nodes to revert.

The controller would update status to record converted, reverted, and failed nodes.

12 changes: 6 additions & 6 deletions docs/proposals/20220627-yurthub-cache-refactoring.md
@@ -70,7 +70,7 @@ When deleting a cache-agent, CacheManager should recycle the cache used by this

#### 3.1.4 the implementation of saving list objects depends on the DiskStorage implementation

-As described in [#265](https://github.com/openyurtio/openyurt/pull/265), each cache-agent can only have the cache of one type of list for one resource. Considering that if we update cache using items in list object one by one, it will result in some cache objects not being deleted. Thus, in `saveListObject`, it will replace all objects under the resource directory with the items in the response of the list request. It works well when the CacheManager uses DiskStorage, because cache for different components are stored at different directory, for example, service cache for kubelet is under `/etc/kubernetes/cache/kubelet/services`, service cache for kube-proxy is under `/etc/kubernetes/cache/kube-proxy/services`. Replacing the serivce cache of kubelet has no influence on service cache of kube-proxy. But when using Yurt-Coordinator storage, services for all components are cached under `/registry/services`, if replacing all the entries under `/registry/services` with items in the response of list request from kubelet, the service cache for kube-proxy will be overwritten.
+As described in [#265](https://github.com/openyurtio/openyurt/pull/265), each cache-agent can only have the cache of one type of list for one resource. Considering that if we update cache using items in list object one by one, it will result in some cache objects not being deleted. Thus, in `saveListObject`, it will replace all objects under the resource directory with the items in the response of the list request. It works well when the CacheManager uses DiskStorage, because cache for different components are stored at different directory, for example, service cache for kubelet is under `/etc/kubernetes/cache/kubelet/services`, service cache for kube-proxy is under `/etc/kubernetes/cache/kube-proxy/services`. Replacing the service cache of kubelet has no influence on service cache of kube-proxy. But when using Yurt-Coordinator storage, services for all components are cached under `/registry/services`, if replacing all the entries under `/registry/services` with items in the response of list request from kubelet, the service cache for kube-proxy will be overwritten.

### 3.2 Definition of Store Interface is not explicit

@@ -126,7 +126,7 @@ The **Policy layer** takes the responsibility of cache policy, including determi

The **Serialization layer** takes the responsibility of serialization/unserialization of cached objects. The logic in this layer is related to Kubernetes APIMachinery. The byte formats it needs to concern include json, yaml and protobuf. The types of objects it needs to concern include kubernetes native resources and CRDs. Currently, the component in this layer is StorageWrapper.

-The **Storage Frontend** layer serves like a shim between the Serialization layer and Stroage Backend layer. It should provide interface to cache objects shielding the differences among different storages for the upper-layer. It also takes the responsibility of implementation of KeyFunc. Currently, the component in this layer is DiskStorage. We can add more storage in this layer later, such as Yurt-Coordinator Storage.
+The **Storage Frontend** layer serves like a shim between the Serialization layer and Storage Backend layer. It should provide interface to cache objects shielding the differences among different storages for the upper-layer. It also takes the responsibility of implementation of KeyFunc. Currently, the component in this layer is DiskStorage. We can add more storage in this layer later, such as Yurt-Coordinator Storage.

The **Storage Backend layer** is the entity that interacts with the storage to complete the actual storage operation. It can be implemented by ourselves, such as FS Operator, or be provided by third-party, such as clientv3 pkg of etcd.

@@ -321,13 +321,13 @@ func (fs *FileSystemOperator) Rename(oldPath string, newPath string) error
## 7. How to solve the above problems

| Problem | Solution |
-| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| 3.1.1   | add rv parameter to Update func in Store interface, the storage will take the responsibility to compare the rv and update the cache, which makes it easy to implement tht atomic operation |
-| 3.1.2   |  |
+| ------- |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| 3.1.1   | add rv parameter to Update func in Store interface, the storage will take the responsibility to compare the rv and update the cache, which makes it easy to implement the atomic operation |
+| 3.1.2   |  |
| 3.1.3 | use DeleteComponentResources instead of DeleteCollection, and pass the component name as argument rather than rootKey |
| 3.1.4 | use ReplaceComponentList instead of Replace, and pass component, resource, namespace as arguments rather than rootKey |
| 3.2.1 | distinguish the responsibility between Create and Update in Store interface |
| 3.2.2 | same as 3.1.3, explicitly define that DeleteComponentResources is used to delete the cache of the component |
| 3.3.1 | move the logic of in-memory cache from StorageWrapper to CacheManager |
| 3.3.2 | same as 3.1.2 |
-| 3.4 | Other non-cache related components should use FS Opeartor instead of DiskStorage |
+| 3.4 | Other non-cache related components should use FS Operator instead of DiskStorage |
4 changes: 2 additions & 2 deletions docs/proposals/20220725-pod-recovery-efficiency-proposal.md
@@ -242,9 +242,9 @@ Step 3: start the container.
Step 4: execute the post start hook.
```

-On the basis of the pod start procedure by Kubelet, when the edge nodes restart and Kubelet initialized and start, YurtHub will start to work first. According to YurtHub relys on host network, it can be started without CNI start. There will be 1s between Kubelet started and YurtHub started. Also, there are 1.5s between YurtHub started and YurtHub server work. After YurtHub server work, it plays the role of apiserver in the weak network condition.
+On the basis of the pod start procedure by Kubelet, when the edge nodes restart and Kubelet initialized and start, YurtHub will start to work first. According to YurtHub relies on host network, it can be started without CNI start. There will be 1s between Kubelet started and YurtHub started. Also, there are 1.5s between YurtHub started and YurtHub server work. After YurtHub server work, it plays the role of apiserver in the weak network condition.

-The recovery of nginx pods are blocked in `createSandBox` because they relys on CNI, and flannel as the CNI plugin is not ready.
+The recovery of nginx pods are blocked in `createSandBox` because they rely on CNI, and flannel as the CNI plugin is not ready.

```
Aug 26 16:04:28 openyurt-node-02 kubelet[1193]: E0826 16:04:28.209598 1193 pod_workers.go:191] Error syncing pod 464fc7d4-2a53-4a20-abc3-c51a919f1b1a ("nginx-06-78df84cfc7-b8fc2_default(464fc7d4-2a53-4a20-abc3-c51a919f1b1a)"), skipping: failed to "CreatePodSandbox" for "nginx-06-78df84cfc7-b8fc2_default(464fc7d4-2a53-4a20-abc3-c51a919f1b1a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-06-78df84cfc7-b8fc2_default(464fc7d4-2a53-4a20-abc3-c51a919f1b1a)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"ec15044992d3d0df0185a41d00adaca0fa7895f8ac717399b00f24a68ae3fa3e\" network for pod \"nginx-06-78df84cfc7-b8fc2\": networkPlugin cni failed to set up pod \"nginx-06-78df84cfc7-b8fc2_default\" network: open /run/flannel/subnet.env: no such file or directory"
2 changes: 1 addition & 1 deletion docs/proposals/20220901-add-edge-autonomy-e2e-tests.md
@@ -60,7 +60,7 @@ As a developer of Openyurt, I want to get instant e2e-test-feedback after I made
As a user of Openyurt, I want to make it clear, when I debug, whether it's the Openyurt edge-autonomy-modules are designed with problems, or it's other problems such as something wrong with my kubeadm cluster.

### Implementation Details
-- Ajusting e2e-tests framework
+- Adjusting e2e-tests framework
The e2e-tests will be carried out in a kind cluster of one cloud node and two edge nodes, the components are organized as follows:

<div align="center">
4 changes: 2 additions & 2 deletions docs/proposals/20220910-enhancement-of-servicetopology.md
@@ -51,7 +51,7 @@ If endpointslice or endpoints can be changed along with nodepool or service, the
To make servicetopology filter in Yurthub work properly when service or nodepool change, we need two controllers, one for endpoints and another for endpointslice.

### Endpoints controller
-The endpoints contoller will watch the change of service and nodepool, the event handlers will enqueue the necessary endpoints to the workqueue of controller, then the controller can modify the trigger annotation `openyurt.io/updateTrigger` for the endpoints. The value of trigger annotation is a timestamp, when the annotation of endpoints is modified, then the servicetopology filter can sense the change of endpoints and will get the latest service and nodepool when filtering.
+The endpoints controller will watch the change of service and nodepool, the event handlers will enqueue the necessary endpoints to the workqueue of controller, then the controller can modify the trigger annotation `openyurt.io/updateTrigger` for the endpoints. The value of trigger annotation is a timestamp, when the annotation of endpoints is modified, then the servicetopology filter can sense the change of endpoints and will get the latest service and nodepool when filtering.

#### ·Service event handler

@@ -123,7 +123,7 @@ func (e *EnqueueEndpointsForNodePool) Update(evt event.UpdateEvent,
}
```
### EndpointSlice controller
-The endpointslice contoller will watch the change of service and nodepool, the event handlers will enqueue the necessary endpointslices to the workqueue, then the controller can modify the trigger annotation `openyurt.io/updateTrigger` for those endpointslices.
+The endpointslice controller will watch the change of service and nodepool, the event handlers will enqueue the necessary endpointslices to the workqueue, then the controller can modify the trigger annotation `openyurt.io/updateTrigger` for those endpointslices.

#### ·Service event handler
When the servicetopology configuration in service.Annotations is modified, the handler will enqueue all the endpointslices of that service.
