18 changes: 18 additions & 0 deletions .github/workflows/github-actions-demo.yml
@@ -0,0 +1,18 @@
name: GitHub Actions Demo
run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v4
      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "🍏 This job's status is ${{ job.status }}."
14 changes: 14 additions & 0 deletions .github/workflows/lab9.yml
@@ -0,0 +1,14 @@
name: GitHub Actions Demo
run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
on: [workflow_dispatch]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - name: "Collect information about the runner."
        run: touch info.txt && lsb_release -a >> info.txt && lshw -short -sanitize >> info.txt
      - name: "Upload artifact"
        uses: actions/upload-artifact@v4
        with:
          name: System Information
          path: info.txt
68 changes: 68 additions & 0 deletions Lab1/submission1.md
@@ -0,0 +1,68 @@
# Lab 1: Introduction to DevOps with Git

## Task 1: SSH Commit Signature Verification

**Objective**: Understand the importance of commit signing using SSH keys and set up commit signature verification.

1. **Explore the Importance of Signed Commits**:
- **Research**: Learn why commit signing is crucial for verifying the integrity and authenticity of commits.
- Commits are made to projects of very different size, value, and importance. For a small local project maintained by one person or a small group, commit signing may be redundant and only add time spent on unnecessary work.
On the other hand, for mid-sized teams and projects, signing is recommended: it assigns responsibility for changes to the specific people who made the commits. With commit signing it is almost impossible (I avoid saying "completely impossible" in an IT context, since every year people prove that nothing is fully safe) to claim that someone tampered with their commits, even ones made a long time ago.
The third case is huge projects (like the Linux kernel) or projects of high responsibility (such as aircraft autopilots), where even a small mistake can lead to major failures or critical backdoors. There, only a small group of users can commit or approve commits to highly secured branches, and commit signatures are necessary for safety reasons.

2. **Set Up SSH Commit Signing**:
- **Option 2: Generate a New SSH Key (Recommended: ed25519 Format)**:
- Generate a new SSH key pair using the ed25519 format.

```sh
ssh-keygen -t ed25519 -C "your_email@example.com"
```

- Add the public key to your GitHub account.
- [GitHub Guide to Adding SSH Key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account)

- Configure Git to use your new SSH key for signing commits.

```sh
git config --global user.signingkey <YOUR_SSH_KEY>
git config --global commit.gpgSign true
git config --global gpg.format ssh
```

3. **Make a Signed Commit**:
- Create and sign a commit.

```sh
git commit -S -m "Your signed commit message"
```

- Push the commit with your submission1.md file.

## Task 2: Merge Strategies in Git

**Objective**: Research the differences between merge strategies in Git and modify repository settings to allow only the standard merge strategy.

1. **Research Merge Strategies**:
- **Standard Merge**: Combines two branches by creating a merge commit.
- Pros:
- Preserves the full history of commits and their relationships, which makes the history easier to understand.
- Cons:
- Because the full history is kept plus additional merge commits, the log can become huge and harder to follow.
- **Squash and Merge**: Combines all commits from a feature branch into a single commit before merging.
- Pros:
- A more compact history.
- Cons:
- The connection to the feature branch is lost.
- **Rebase and Merge**: Reapplies commits from a feature branch onto the base branch.
- Pros:
- Produces a very straightforward, linear history with no extra merge commits.
- Cons:
- It rewrites history, which is risky, and typically requires a force push, which is dangerous on shared branches.
- **Summary**: Standard merge is preferred in collaborative environments since it preserves the full history, which is crucial.

2. **Modify Repository Settings**:
- **Disable Squash and Rebase Merge**:
- Go to the Settings page of your forked repository on GitHub.
- Under "General", in the "Pull Requests" section, uncheck "Allow squash merging" and "Allow rebase merging", leaving only "Allow merge commits" enabled.

- Status: Done
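The merge-strategy comparison above can be reproduced in a throwaway repository. The sketch below (assuming git >= 2.28 for `init -b`) shows how a standard merge with `--no-ff` keeps the feature commits and adds a merge commit on top:

```sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email demo@example.com && git config user.name Demo
echo base > f.txt && git add f.txt && git commit -qm "base"
git checkout -qb feature
echo one >> f.txt && git commit -qam "feat: one"
echo two >> f.txt && git commit -qam "feat: two"
git checkout -q main
# --no-ff forces a merge commit even when a fast-forward is possible,
# keeping the feature branch visible in the history
git merge -q --no-ff -m "merge feature" feature
git log --oneline | wc -l   # 4: base + two feature commits + the merge commit
```

Replacing the merge with `git merge --squash feature` followed by one commit would leave only two commits on `main`, which is the history trade-off described above.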
68 changes: 68 additions & 0 deletions Lab10/submission10.md
@@ -0,0 +1,68 @@
# Cloud Computing Lab - Artifact Registries and Serverless Computing Platforms

In this lab assignment, you will research and compare artifact registries and serverless computing platforms in AWS, GCP, and Azure. You will document your findings in a single Markdown file, providing information on popular artifact registries and the best serverless computing platforms in each cloud platform. Follow the tasks below to complete the lab assignment.

## Task 1: Artifact Registries Research

### What is an artifact registry?
To begin with, we need to understand what an artifact registry is and why we need one.
In an IT context, artifacts are the products that result from compiling, building, or packaging a code base. Good examples of artifacts:

- Program installers (.exe, .apk, etc.)
- Docker images
- Prebuilt libraries
- Packages

So, since we produce different artifacts, we need somewhere to store them. This is where artifact registries come in. Their main tasks:
- Storing artifacts in the cloud
- Version control of artifacts
- Security
- Integration with CI/CD.

Now we need to find out which artifact registry is better for our task by comparing their key features. As with cloud infrastructure in general, there are three main players:
- AWS (Amazon)
- Google
- Azure (Microsoft)

### Now let's look at them more closely.

1) **AWS CodeArtifact**
- Integration with other AWS services, which is quite important for big companies and projects since AWS is (if I am not mistaken) currently the biggest cloud ecosystem.
- Security through fine-grained access control.
- Publishing and sharing packages is made as simple as possible.
- Pay-as-you-go pricing

2) **GCP Artifact Registry**
- Integration with Google Cloud services
- Centralized storage for artifacts and dependencies. It makes working with it easier due to the unified interface.
- High-level security

3) **Azure Container Registry**
- Integration with other Microsoft products.
- Geo-replication. It makes sharing your product easier and smoother all around the world.
- Good security measures, including Private Link.
- Supports all types of files.

## Task 2: Serverless Computing Platform Research

### What are serverless computing platforms?

Serverless computing platforms let developers offload server-side tasks (such as CI/CD jobs, deployments, or heavy compilation) to the cloud without having to manage the underlying servers, so there is no need to hire a dedicated server administrator. They are not a replacement for servers, but they are applicable to many tasks.

### Now about the big three:
1) **AWS Lambda**
- Integration with other AWS services.
- Automatic scaling, so you don't have to worry in advance about the future growth of a project.
- Very good resource management settings. You can set up almost everything as you want, making it very flexible for almost any task.
- Pay-as-you-go pricing

2) **Google Cloud Functions**
- Low latency is one of the main goals the Google team tries to achieve. It can be crucial for some tasks.
- Fast deployment, another priority for the team. Quite good for small- and medium-sized projects.
- A big variety of very useful and easy-to-use monitoring tools.
- You pay only for the resources your tasks actually consume, and very light workloads fall under a free tier, which effectively makes it free to implement and pay to use.

3) **Azure Functions**
- Durable Functions, which allow creating stateful workflows and make the service even more powerful.
- A huge variety of triggers and bindings to choose from.
- Pay-as-you-go pricing
Binary file added Lab2/lab2_1.png
Binary file added Lab2/lab2_2.png
Binary file added Lab2/lab2_3.png
Binary file added Lab2/lab2_4.png
84 changes: 84 additions & 0 deletions Lab2/submission2.md
@@ -0,0 +1,84 @@
# DevOps Tool Exploration

In this lab, you will explore essential DevOps tools and set up a project on the Fleek service. Follow the tasks below to complete the lab assignment.

## Task 1: Set Up an IPFS Gateway Using Docker

**Objective**: Understand and implement an IPFS gateway using Docker, upload a file, and verify it via an IPFS cluster.

1. **Set Up IPFS Gateway**:
- Install Docker on your machine if it's not already installed.
- It was installed

- Pull the IPFS Docker image and run an IPFS container:

Pull the IPFS Docker image:
```sh
docker pull ipfs/go-ipfs
```
Build and run a container from the downloaded image. Flags used:
- `-v` creates volumes in the container, which are just links to folders on the host system.
- `-d` runs the container in the background so it does not occupy the terminal.
- `--name` specifies a name for the container.
- `-p` maps ports: for example, port 8080 inside the container is exposed on port 8080 of the host machine.
```sh
docker run -d --name ipfs_host -v /path/to/folder/with/file:/export -v ipfs_data:/data/ipfs -p 8080:8080 -p 4001:4001 -p 5001:5001 ipfs/go-ipfs
```

- Verify the IPFS container is running:

```sh
docker ps
```

```
64c95467573c ipfs/go-ipfs "/sbin/tini -- /usr/…" 15 seconds ago Up 15 seconds (healthy) 0.0.0.0:4001->4001/tcp, 0.0.0.0:5001->5001/tcp, 4001/udp, 0.0.0.0:8080->8080/tcp, 8081/tcp ipfs_host
```

2. **Upload a File to IPFS**:
- Open a browser and access the IPFS web UI:

```sh
http://127.0.0.1:5001/webui/
```

- Explore the web UI and wait for 5 minutes to sync up with the network.
- Upload any file via the web UI.
- Use the obtained hash to access the file via any public IPFS gateway. Here are a few options:
- [IPFS.io Gateway](https://ipfs.io/ipfs/)
- [Cloudflare IPFS Gateway](https://cloudflare-ipfs.com/ipfs/)
- [Infura IPFS Gateway](https://ipfs.infura.io/ipfs/)

- Append your file hash to any of the gateway URLs to verify your file is accessible. Note that it may fail due to network overload, so don't worry if you can't reach it.


For some reason, the file was reachable only through the local gateway.
![image](lab2_3.png)

3. **Documentation**:
- Create a `submission2.md` file.
- Share information about connected peers and bandwidth in your report.
![image](lab2_1.png)
![image](lab2_2.png)
- Provide the hash and the URLs used to verify the file on the IPFS gateways.
QmYgf7rbovaD3DK9cK3WzDwD1S4EyM4jj5efYLtr7BQuhs
http://bafybeiezwtzmzm6cj7ylsoq6matuo5l7gnns7qqcrbucmmnfwvlbv6wvji.ipfs.localhost:8080/
https://dweb.link/ipfs/QmYgf7rbovaD3DK9cK3WzDwD1S4EyM4jj5efYLtr7BQuhs

## Task 2: Set Up Project on Fleek.co

**Objective**: Set up a project on the Fleek service and share the IPFS link.

1. **Research**:
- Understand what IPFS is and its purpose.
IPFS is a decentralized file system that addresses data by its content rather than its location. This makes data available from any point on the internet and resistant to censorship.
- Explore Fleek's features.


2. **Set Up**:
- Sign up for a Fleek account if you haven't already.
- Use your fork of the Labs repository as your project source. Optionally, set up your own website (notify us in advance).
- Configure the project settings on Fleek.
- Deploy the Labs repository to Fleek, ensuring it is uploaded to IPFS.
![image](lab2_4.png)

3. **Documentation**:
- Share the IPFS link and domain of the deployed project in the `submission2.md` file.
https://ipfs.io/ipfs/bafybeienfnnln4emucoaziqfnjrod6fhd4wenubg6ws4zezvubgprm5ltq/
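The content-addressing idea behind these IPFS URLs can be illustrated with an ordinary hash. IPFS actually uses multihash-encoded CIDs, so this is only an analogy, with `sha256sum` standing in for the real hashing:

```sh
# Same bytes -> same digest, no matter where the file is stored
a=$(printf 'hello ipfs' | sha256sum | cut -d' ' -f1)
b=$(printf 'hello ipfs' | sha256sum | cut -d' ' -f1)
c=$(printf 'hello ipfs!' | sha256sum | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "identical content, identical address"
[ "$a" != "$c" ] && echo "any change produces a different address"
```

This is why the same CID works on any gateway: the address identifies the content itself, not a server that hosts it.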

Binary file added Lab3/lab3_1.png
Binary file added Lab3/lab3_2.png
Binary file added Lab3/lab3_3.png
Binary file added Lab3/lab3_4.png
136 changes: 136 additions & 0 deletions Lab3/submission3.md
@@ -0,0 +1,136 @@
# Version Control

In this lab, you will learn about version control systems and their importance in collaborative software development. You will specifically focus on Git, one of the most widely used version control systems. Follow the tasks below to complete the lab assignment.

## Task 1: Understanding Version Control Systems

**Objective**: Understand how Git stores data.

1. **Create and Explore a Repository**:
- Use the current repository and make a few commits.
- Use `git cat-file` to inspect the contents of blobs, trees, and commits.

-
```sh
git cat-file -p <blob_hash>
```
```sh
The file content was printed here; it is too long to include.
```
-
```sh
git cat-file -p <tree_hash>
```
```
100644 blob ede183da8ef201e5f5737eea502edc77fd8a9bdc README.md
040000 tree 0ae774b180e2f58aeec5d66d1b3407c9c9cdd5cc Solutions
040000 tree b926a085a558de1861b03912b887736724b46292 images
100644 blob 5738bc15a0416ad2624df13badfb235052777e79 index.html
100644 blob 1dba99957c3bb59d40913294b83e40d5c38b6c0b lab1.md
100644 blob bf5553698071098c4cb429db6b14811d4316e822 lab10.md
100644 blob 1b99cc0044f93f556a0f6a599c7edf2f33f4944a lab2.md
100644 blob 2f8463cc188ec6ca69ae7a0f98d38e132280becb lab3.md
100644 blob d66a6867f90e48f6f44d9d80821aa1d866a24882 lab4.md
100644 blob 2ff5995a25b74c9c02a143c09a9601ce66001a9f lab5.md
100644 blob 793bb19cd158fae333205f524eba5adc16718c58 lab6.md
100644 blob e3daa92d57248cbfb76d60de86f7b2e0da7e9a22 lab7.md
100644 blob 0a88f0778b2534da7d9208198d1f2de010ca7459 lab8.md
100644 blob 15ab5a07323c525efeda1f3ce737612852e02f2b lab9.md
```
-
```sh
git cat-file -p <commit_hash>
```
```sh
tree 53fa5326dd73918edd7bfdd614c5c5725b86670e
parent 47e85d158914eee28418c250e813cb59c26f5d21
author Stepan Kuznetsov <Stepan14511@gmail.com> 1731404522 +0300
committer Stepan Kuznetsov <Stepan14511@gmail.com> 1731404522 +0300
gpgsig -----BEGIN SSH SIGNATURE-----
censored
-----END SSH SIGNATURE-----

Lab2
```

- Create a `submission3.md` file.
- Provide the output in the `submission3.md` file.

## Task 2: Practice with Git Reset Command

**Objective**: Practice using different ways to use the `git reset` command.

1. **Create a New Branch**:
- Create a new branch named "git-reset-practice" in your Git repository.

```sh
git checkout -b git-reset-practice
```
```sh
Switched to a new branch 'git-reset-practice'
```

2. **Explore Advanced Reset and Reflog Usage**:
- Create a series of commits.

```sh
echo "First commit" > file.txt
git add file.txt
git commit -m "First commit"

echo "Second commit" >> file.txt
git add file.txt
git commit -m "Second commit"

echo "Third commit" >> file.txt
git add file.txt
git commit -m "Third commit"
```

![image](lab3_1.png)
P.s.: style used by me could be found [here](https://stackoverflow.com/a/9074343)

- Use `git reset --hard` and `git reset --soft` to navigate the commit history.

```sh
git reset --soft HEAD~1
```
![image](lab3_2.png)
```sh
git reset --hard HEAD~1
```

![image](lab3_1.png)

- Use `git reflog` to recover commits after a reset.

```sh
git reflog
```
- ```sh
470b1ac (HEAD -> git-reset-practice) HEAD@{0}: reset: moving to HEAD~1
bfaeda7 HEAD@{1}: reset: moving to HEAD~1
a02d43f HEAD@{2}: commit: Third commit
bfaeda7 HEAD@{3}: commit: Second commit
470b1ac (HEAD -> git-reset-practice) HEAD@{4}: commit: First commit
330e2a9 (origin/master, origin/HEAD, master) HEAD@{5}: checkout: moving from master to git-reset-practice
330e2a9 (origin/master, origin/HEAD, master) HEAD@{6}: commit: Lab2
47e85d1 HEAD@{7}: commit: Lab1
b8ca852 HEAD@{8}: clone: from https://github.com/stepan14511/Sum24-intro-labs
```
```sh
git reset --hard <reflog_hash>
```

![image](lab3_4.png)

3. **Documentation**:

`git checkout`, `git add`, and `git commit` seem like commands that don't need explanation.
The next step we made was `git reset --soft/--hard`:
- `git reset --soft` moves HEAD to another commit. `HEAD~1` is a reference to the commit one before HEAD; by changing the number we can go back any number of commits. The changes themselves stay staged.
- `git reset --hard` does the same, but also discards all changes in the working directory and staging area. It should be used wisely, as it can lead to data loss.

The next step was `git reflog`. This command shows a log of the commits that were referenced by HEAD, branches, tags, or other pointers. It is most widely used to retrieve the hash of a deleted commit in order to recover it, but it can also be used for other tasks.

`git reset --hard <reflog_hash>` points HEAD at the specified commit. In our case we retrieved the hash of our third commit and restored HEAD to it.
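The whole recovery flow can be reproduced end to end in a throwaway repository. This is a sketch with made-up commit messages, not the exact lab history:

```sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name Demo
echo one > file.txt && git add file.txt && git commit -qm "First commit"
echo two >> file.txt && git commit -qam "Second commit"
git reset -q --hard HEAD~1    # "Second commit" is now unreachable from the branch
lost=$(git reflog | awk '/commit: Second commit/ {print $1; exit}')
git reset -q --hard "$lost"   # the reflog still remembers it, so we can jump back
grep -q two file.txt && echo recovered
```

Note that reflog entries eventually expire (90 days by default), so this recovery works only for reasonably recent resets.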