Deploying an httpd application on Kubernetes with PVCs, PVs, a NodePort Service, and NFS-backed dynamic centralized storage provisioning.
The Job DSL plugin attempts to solve this problem by allowing jobs to be defined in a programmatic form in a human-readable file. Writing such a file is feasible without being a Jenkins expert as the configuration from the web UI translates intuitively into code.
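As an illustration, a freestyle job defined through the Job DSL plugin might look like the following sketch (the job name, repository URL, and polling schedule are hypothetical placeholders, not taken from this project):

```groovy
// Hypothetical Job DSL seed script: defines a freestyle job that
// polls a Git repository and runs a shell build step.
job('example-build') {
    scm {
        git('https://github.com/example/repo.git', 'master')
    }
    triggers {
        scm('H/15 * * * *')   // poll SCM roughly every 15 minutes
    }
    steps {
        shell('make test')
    }
}
```

Running this from a seed job generates the `example-build` job, so the configuration lives in version control instead of the web UI.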
Prerequisite: a pre-installed Kubernetes cluster (e.g. minikube). By default, minikube provides no internal NFS dynamic provisioner StorageClass that can satisfy a PVC or PV dynamically, so we create an NFS-client dynamic provisioner…
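A minimal sketch of the StorageClass and a PVC that consumes it, assuming the community nfs-subdir-external-provisioner has been deployed; the StorageClass name, PVC name, and size below are placeholders:

```yaml
# StorageClass backed by an NFS-client dynamic provisioner, plus a PVC
# that claims storage from it. The provisioner string must match the
# provisioner deployment; names and sizes are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: httpd-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```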
Supervise.ly is a powerful platform for computer vision development, where individual researchers and large teams can annotate and experiment with datasets and neural networks.
Tasks to be created:-
Create a project designed to solve a real use case, either by applying transfer learning to an existing model (Mask-RCNN, VGG16, etc.) or by creating a new model (Mask-RCNN, GAN, RNN, etc.) to solve a real-world or novel problem.
1. Make your own custom dataset using Supervisely.
2. Either create a new model or use an existing model via transfer learning.
3. Launch the training on the AWS cloud.
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. EKS runs upstream Kubernetes and is certified Kubernetes-conformant, so you can leverage all the benefits of open-source tooling from the community. You can also easily migrate any standard Kubernetes application to EKS without needing to refactor your code. Currently, EKS supports Kubernetes v1.14, v1.15, and v1.16 (default).
Prerequisite:- A workstation is required with AWS CLI v2, eksctl, and kubectl preconfigured, plus an IAM user with sufficient policies to create an AWS EKS cluster and an AWS EFS file system in the same VPC. …
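With those prerequisites in place, the cluster itself can be described declaratively. A minimal eksctl ClusterConfig sketch follows; the cluster name, region, instance type, and node count are placeholders:

```yaml
# Hypothetical eksctl cluster definition.
# Apply with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: ap-south-1
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
```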
Tasks to be created:-
1. Create a container image that has a Linux distribution and the other basic configuration required to run a cloud worker node for Jenkins.
(e.g. here we require kubectl to be configured inside that node.)
2. When we launch the job, it should automatically start on a cloud worker node selected by the labels provided, giving a dynamic approach to running jobs.
3. Create a job chain of Job1 & Job2 using the Build Pipeline plugin in Jenkins.
4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub (using local hooks and webhooks) and perform the following operations:
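Task 1 above can be sketched as a Dockerfile; the base image, package versions, and kubectl release are placeholder assumptions, not the project's exact build:

```dockerfile
# Hypothetical worker-node image for a Jenkins cloud agent: a Linux
# base with Java (required by Jenkins agents), git, and kubectl.
FROM centos:7
RUN yum install -y java-11-openjdk git && \
    curl -LO "https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl" && \
    chmod +x kubectl && \
    mv kubectl /usr/local/bin/
```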
Tasks to be created:-
1. Create a container image that has Jenkins installed, using a Dockerfile, or use the Jenkins server on RHEL 8/7.
2. When we launch this image, it should automatically start the Jenkins service in the container.
3. Create a job chain of Job1, Job2, Job3, and Job4 using the Build Pipeline plugin in Jenkins.
4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub.
5. Job2:
1. Create a persistent volume claim.
2. Create a service for the application.
3. Create a deployment for the application.
6. Job3: Test whether the app is working.
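The three steps of Job2 could be sketched as Kubernetes manifests like the following; all names, the NodePort, the storage size, and the `httpd` image tag are illustrative placeholders:

```yaml
# PVC, NodePort Service, and Deployment for a simple httpd app.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: httpd
          image: httpd
          volumeMounts:
            - name: web-data
              mountPath: /usr/local/apache2/htdocs
      volumes:
        - name: web-data
          persistentVolumeClaim:
            claimName: web-pvc
```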
Creating AWS infrastructure (CloudFront + S3 + EC2 instances) using the Terraform tool with HCL (HashiCorp Configuration Language) scripts; the Ansible engine is used for infrastructure configuration management.
Pre-requisites:- Preconfigured AWS CLI, Ansible engine, Terraform CLI, and an IAM user with administrative privileges.
[root@server terraform]# aws configure
AWS Access Key ID [********************]:
AWS Secret Access Key [********************]:
Default region name [ap-south-1]:
Default output format [None]:
[root@server terraform]# ansible --version
config file = /root/terraform/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Jan 11 2019, 02:17:16) [GCC 8.2.1 20180905]…
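The Terraform side of this setup might be sketched in HCL as below; the region, bucket name, AMI ID, and playbook name are placeholders, and the CloudFront distribution is omitted for brevity:

```hcl
# Hypothetical minimal HCL sketch of the S3 + EC2 portion, handing
# the launched instance over to Ansible for configuration management.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "web_assets" {
  bucket = "example-web-assets-bucket"   # placeholder bucket name
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  # Configure the new instance with a (hypothetical) Ansible playbook.
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.public_ip},' webserver.yml"
  }
}
```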
Using Keras and VGG16 (a model pre-trained on the ImageNet dataset) for transfer learning.
# pip install -r requirements.txt
> jupyter notebook
Clone the repo
# git clone https://github.com/A4ANK/face_recognition_Transfer_Learning.git
In the repo directory, create two subdirectories:
# mkdir train
# mkdir test
Similarly, create subdirectories inside the test and train directories for each subcategory (class) that you want to predict. Now collect the dataset using the face_extractor.ipynb notebook.
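The directory layout described above can be sketched as follows; `alice` and `bob` are hypothetical class names standing in for your own subjects:

```shell
# Create one subdirectory per class under both train/ and test/.
# "alice" and "bob" are placeholder class names.
mkdir -p train/alice train/bob
mkdir -p test/alice test/bob
ls train   # each class gets its own subdirectory
```

Keras-style image loaders infer class labels from exactly this kind of one-folder-per-class layout.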
Load the VGG16 pre-trained model inside Transfer Learning.ipynb and use transfer learning to train it on the new, smaller dataset created with the face_extractor.ipynb notebook.
Run the Real Time Face recognizer.ipynb notebook to perform real-time face recognition.
Three jobs are needed to simulate this project.
A testing environment is deployed on top of Docker via a git post-commit hook whenever commits are made on a feature branch (any branch other than master, the main branch), and the Jenkins job is scheduled using Poll SCM.
Git hooks => post-commit script
# vi .git/hooks/post-commit
#!/bin/bash
echo "First and then Post Commit Tasks are started"
echo "git push is done to the current Remote Branch"
#echo "remote Build Trigger using jenkins URL"
#curl --user "username:password" http:///job/job3/build?token=TOKEN
I'm a computer science undergraduate, and my primary areas of work are Linux, cloud computing, DevOps culture, and various open-source tools and technologies.