Wednesday, 4 October 2017

DevOps for Engineers




 

Objective: DevOps, or the collaboration between development and operations teams, is an important component of companies today. Developing and implementing a DevOps culture helps focus IT results and saves time and money as the gap between developers and IT operations teams closes. Just as the term and culture are new, so are many of the best tools DevOps engineers use to do their jobs efficiently and productively. Tools are used across the DevOps cycle for configuration management, security, monitoring, automation, and logging.

What is the need for DevOps?

DevOps enables companies to deliver small changes quickly and often. This has many advantages, such as quick feedback from customers and better software quality, which in turn lead to high customer satisfaction. To achieve this, companies need to:
1.      Increase deployment frequency
2.      Lower the failure rate of new releases
3.      Shorten the lead time between fixes
4.      Improve the mean time to recovery in the event of a new release crashing
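The four goals above are all measurable. As a rough, hypothetical sketch (the deployment records below are invented), deployment frequency, change failure rate, and mean time to recovery can each be computed from a simple deployment log:

```python
from datetime import datetime

# Hypothetical deployment records: (timestamp, failed?, minutes to recover if failed)
deployments = [
    (datetime(2017, 10, 1, 9, 0),  False, 0),
    (datetime(2017, 10, 1, 15, 0), True,  45),
    (datetime(2017, 10, 2, 11, 0), False, 0),
    (datetime(2017, 10, 3, 10, 0), True,  15),
]

def deployment_frequency(deps, days):
    """Average deployments per day over the observation window."""
    return len(deps) / days

def change_failure_rate(deps):
    """Fraction of deployments that caused a failure."""
    return sum(1 for _, failed, _ in deps if failed) / len(deps)

def mean_time_to_recovery(deps):
    """Average minutes to recover, over failed deployments only."""
    times = [mins for _, failed, mins in deps if failed]
    return sum(times) / len(times)

print(deployment_frequency(deployments, days=3))  # about 1.33 deploys/day
print(change_failure_rate(deployments))           # 0.5
print(mean_time_to_recovery(deployments))         # 30.0
```

Tracking numbers like these over time is how teams know whether the four goals are actually improving.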

Best Operating System for DevOps

Linux

Prerequisite

A scripting language (Linux shell script, Python, Ruby, etc.) plus domain knowledge of networking, storage, security, data centres, or your respective area.

Best site for learning DevOps

https://linuxacademy.com/devops/courses

 

Top DevOps Technologies

Git, Jenkins, Selenium, Puppet, Chef, Ansible, Nagios, Docker

  • Git : Version Control System tool
  • Jenkins : Continuous Integration tool
  • Selenium : Continuous Testing tool
  • Puppet, Chef, Ansible : Configuration Management and Deployment tools
  • Nagios : Continuous Monitoring tool
  • Docker : Containerization tool

 

DevOps Certification

https://www.fastlaneus.com/certification/RHCA-DEVOPS

To earn the RHCA: DevOps title you must pass these exams:
·       EX270 – Red Hat Certificate of Expertise in Container Management exam - coming soon

 

Periodic Table of DevOps Tools

https://xebialabs.com/periodic-table-of-devops-tools/

 

Network Engineer -> DevOps Engineer

From a personal growth perspective, I would recommend learning scripting first. You will be able to leverage scripting into automation in any role; as a single individual you might be able to effect change across 100 devices in an hour, with only a few errors. Your scripts can scale to tens of thousands of devices, with zero errors, so long as you test them! Nothing scales like failure!

Using any sort of scripting to handle repeatable network administration tasks is always a good place to start. Whether it's Bash, Expect, Python, or Ruby doesn't matter at first. Eventually, depending on what sort of shop you're in, learning Puppet or Chef for network administration will really help. They both work with Cisco gear, and I'm sure they work with other devices.

Try learning Bash first. It's relatively easy, and most Ops people can understand it. Try to write scripts that configure network devices, or, say, a script that connects to all the routers, stores a statistic, and prints the results. Learn how to set up a Git repository and how to store your scripts in it. Learn about GitLab. How could you use Git with your networking devices? Wouldn't it be nice to have a Time Machine view of all of your network configs? Since most configs are the same, could you make a script that just changed parameters in a generic config template, depending upon which device were to receive that template? Are there tools out there that do this already? Take a look at Puppet, Chef, and Ansible.
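The "generic config template" idea above can be sketched in a few lines of Python. Everything here, the template text, hostnames, VLANs, and addresses, is a made-up example for illustration, not any vendor's actual syntax:

```python
# A minimal sketch of per-device config generation from one generic template.
# The template and device parameters below are hypothetical examples.
TEMPLATE = """hostname {hostname}
interface Vlan{vlan}
 ip address {ip} 255.255.255.0
"""

devices = [
    {"hostname": "sw-access-01", "vlan": 10, "ip": "10.0.10.2"},
    {"hostname": "sw-access-02", "vlan": 20, "ip": "10.0.20.2"},
]

def render_config(device):
    """Fill the generic template with one device's parameters."""
    return TEMPLATE.format(**device)

configs = {d["hostname"]: render_config(d) for d in devices}
print(configs["sw-access-01"])
```

The rendered text files could then be committed to that same Git repository, giving you the "Time Machine" view of every device's config.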

Server configuration management tools: Puppet, Chef, Ansible, and Salt, for example, are primarily used for the provisioning and ongoing configuration management of servers. They apply changes to infrastructure using an “infrastructure as code” approach, where the “desired state” of infrastructure is stored in a repository (usually a source control system) and updates are applied to managed systems to ensure they are in compliance with the infrastructure’s “desired state.”

Changes to these systems should be made through the config management tool's interface/language, with the automation routines normally written in Ruby or Python and configurations exposed as properties in something like JSON or YAML. Any changes made directly on a target server will likely be reverted to the “desired state” definition when the agents or systems next synchronize with the master.

They can also be used to deploy and manage certain types of applications very effectively. This, I believe, is the root of much of the confusion, as some organizations/applications use configuration management systems for provisioning, infrastructure configuration, and application deployments. They normally do this by deploying system and application changes in exactly the same way.
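The "desired state" model described above can be illustrated with a toy reconciliation step. Real tools such as Puppet or Chef are far more sophisticated; this sketch only shows the core idea that manual drift on a target server gets reverted on the next synchronization:

```python
# Toy "infrastructure as code" reconciliation: the desired state is declared
# as data, and any drift on the managed system is reverted on the next run.
# All keys and values here are invented illustrations.
desired_state = {"ntp_server": "10.0.0.1", "ssh_port": 22, "motd": "authorized use only"}

# Simulated current state of a target server, with some manual drift:
# the SSH port was changed by hand and the motd is missing entirely.
actual_state = {"ntp_server": "10.0.0.1", "ssh_port": 2222}

def reconcile(desired, actual):
    """Return and apply the changes needed to bring `actual` into compliance."""
    changes = {}
    for key, value in desired.items():
        if actual.get(key) != value:
            changes[key] = value
    actual.update(changes)  # apply: drifted or missing keys are (re)set
    return changes

applied = reconcile(desired_state, actual_state)
print(applied)  # only the drifted port and the missing motd are corrected
```

Note the key property: running `reconcile` a second time applies no changes at all, which is exactly the idempotent behaviour these tools aim for.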

 

Cycle of DevOps

Code → Test and build → Package your application (containers) → Configuration management → Release automation → Monitor the deployment → Respond, learn, and improve


 

 

Use Cases (Network Automation, Configuration Management) and an E-commerce Website

Making the case for network automation and configuration management:
Are you the kind of person who has to build out configurations? Maybe load OSs on new devices, or perform an OS upgrade once or twice per year? Maybe you need to quickly deploy a new change across several devices, and you wish you had a tool that offered customization that made it relevant for your environment?
Initial device configuration: ever have to unbox a bunch of new gear and configure each device individually? Maybe there was a bunch of copying and pasting between text files during this process? Ansible can be used for “network build automation,” as Schulman likes to call it. This can be as simple as using Ansible to create the finalized text files for you while you still load them onto the devices, or using Ansible for config building plus deploying.
Deploying configuration across one or more devices: this is when you'd want to deploy a simple change across the network. Or maybe you just need to update one device. It doesn't matter; either can be done with Ansible.
Workflow automation: combine multiple network configurations and create custom workflows by leveraging one or more playbooks, and even integrate with workflows that include system and application changes. This is where Ansible really starts to show its power.
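The "deploy a simple change across one or more devices" case can be sketched as an idempotent bulk update: only devices missing the intended line get touched. The device names and fake running configs below are illustrative stand-ins for what a tool like Ansible would gather from real gear:

```python
# Idempotent bulk change: push a config line only where it is missing.
# The inventory and "running configs" are invented for illustration.
running_configs = {
    "rtr-edge-01": ["ntp server 10.0.0.1"],
    "rtr-edge-02": [],
    "rtr-core-01": ["ntp server 10.0.0.1"],
}

def apply_change(configs, line):
    """Add `line` to every device that lacks it; return the devices changed."""
    changed = []
    for device, config in configs.items():
        if line not in config:
            config.append(line)  # stand-in for pushing the line to the device
            changed.append(device)
    return changed

changed = apply_change(running_configs, "ntp server 10.0.0.1")
print(changed)  # only the device that was missing the line is modified
```

Reporting "changed" versus "unchanged" devices, rather than blindly pushing to everything, mirrors how Ansible summarizes a playbook run.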

Explain with a use case where DevOps can be used in industry / real-life (e-commerce website)

Etsy is a peer-to-peer e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items. Etsy struggled with slow, painful site updates that frequently caused the site to go down. This affected sales for the millions of Etsy's users who sold goods through the online marketplace and risked driving them to competitors.
With the help of a new technical management team, Etsy transitioned from its waterfall model, which produced four-hour full-site deployments twice weekly, to a more agile approach. Today, it has a fully automated deployment pipeline, and its continuous delivery practices allow it to deploy many times a day.

How do all these tools work together?

 

Given below is a generic logical flow where everything gets automated for seamless delivery. However, this flow may vary from organization to organization as per the requirement.
1.    Developers write the code, and this source code is managed by version control system tools like Git.
2.    Developers send this code to the Git repository, and any changes made to the code are committed to this repository.
3.    Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven.
4.    Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins releases the code onto the test environment, where testing is done using tools like Selenium.
5.    Once the code is tested, Jenkins sends it for deployment on the production server (even the production server is provisioned and maintained by tools like Puppet).
6.    After deployment, it is continuously monitored by tools like Nagios.
7.    Docker containers provide a testing environment in which to test the build's features.
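At its core, the seven-step flow above is a sequence of stages where any failure stops the pipeline. A toy stand-in, with trivial functions in place of Git, Maven, Selenium, and Puppet, might look like:

```python
# A toy delivery pipeline: each stage runs in order, and a failing stage
# halts the flow, mirroring how a Jenkins job stops on a failed build.
def checkout():  return True   # stand-in for pulling code from Git
def build():     return True   # stand-in for an Ant/Maven build
def test():      return True   # stand-in for Selenium tests
def deploy():    return True   # stand-in for a Puppet-provisioned deploy

PIPELINE = [("checkout", checkout), ("build", build),
            ("test", test), ("deploy", deploy)]

def run_pipeline(stages):
    """Run stages in order; return (completed stage names, success flag)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, False  # stop at the first failure
        completed.append(name)
    return completed, True

done, ok = run_pipeline(PIPELINE)
print(done, ok)
```

Real CI servers add plugins, parallelism, and notifications on top, but the stop-on-failure sequencing is the same.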

 

DevOps Cycle: Stages, Details, and Top Tools

Code

 

After you've written some code, there are a few things that need to happen before getting it into staging: get it code reviewed and approved, merge it into the master branch in a version control repository, and run any local tests as needed.

Top tools: GitHub, Bitbucket, Gerrit, GitLab

Test and build

 

Now it’s time to automate the execution of tasks related to building, testing, and releasing code. Before the build can get deployed, it needs to undergo a number of tests to ensure that it’s safe to push to production: unit tests, integration tests, functional tests, acceptance tests, and more. Tests are a great way to ensure that existing pieces of your codebase continue to function as expected when new changes are introduced. It’s important to have tests that run automatically whenever there’s a new pull request. This minimizes errors that escape because of manual oversight, reduces the cost of performing reliable tests, and exposes bugs earlier.

There are also a number of great open source and paid tools that do useful things once the tests are complete, like automatically picking up changes to the master and pulling down dependencies from a repository to build new packages.
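The automated tests mentioned above are usually ordinary unit-test suites that the CI server runs on every pull request. A minimal illustration (the function under test is invented purely for the example):

```python
import unittest

def normalize_hostname(name):
    """Example function under test: trim whitespace, lowercase, drop a trailing dot."""
    return name.strip().lower().rstrip(".")

class TestNormalizeHostname(unittest.TestCase):
    # In CI, a runner like Jenkins would execute this suite on every pull
    # request, failing the build (and blocking the merge) on any assertion.
    def test_lowercases(self):
        self.assertEqual(normalize_hostname(" WEB01.Example.COM "), "web01.example.com")

    def test_strips_trailing_dot(self):
        self.assertEqual(normalize_hostname("db01."), "db01")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Wiring this into the CI server so it runs on every push is what turns a test file into the safety net described above.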

 

Top tools include: Jenkins, GoCD, Maven, CruiseControl, TravisCI, CircleCI

Containers and Schedulers

 

With the advent of Docker and containers, teams can now easily provision lightweight, consistent, and disposable staging and development environments without needing to spin up new virtualized operating systems.

Containers standardize how you package your application, improving resource utilization and flexibility, and making it easier to make changes faster. This also enables your application to run anywhere. In other words, things will magically behave in production exactly as they did when you made the changes on your laptop.

 

Top tools include: Docker, Kubernetes, Mesos, Nomad

 

Configuration Management

 

With configuration management, you can track changes to your infrastructure and maintain a single source of system configuration. Look for a tool that makes it easy to version control and make replicas of images — i.e. anything you can take a snapshot of like a system, cloud instance, or container. The goal here is to ensure standardized environments and consistent product performance. Configuration management also helps you better identify issues that resulted from changes, and simplifies autoscaling by automatically reproducing existing servers when more capacity is needed.

 

Top tools include: Chef, Ansible, Puppet, SaltStack

 

Release automation

 

Release automation tools enable you to automatically deploy to production. They should include capabilities such as automated rollbacks and copying artifacts to the host before starting the deployment, and, especially if you're a larger organization, an agentless architecture, so you don't have to install agents and configure firewalls at scale across your server instances.

Note that if something passes the tests, it typically automatically gets deployed. One best practice is to perform a canary deployment first that deploys to a subset of your infrastructure, and if there are no errors, then do a fleet wide deploy.
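The canary practice described above can be sketched as a two-phase rollout. The host names and the health check here are invented placeholders:

```python
# Canary deploy sketch: release to a small subset first, check for errors,
# and only then roll out to the rest of the fleet.
fleet = ["web-01", "web-02", "web-03", "web-04", "web-05"]

def deploy_to(host):
    """Placeholder for the real deployment step on one host."""
    return f"deployed to {host}"

def healthy(host):
    """Placeholder health check; a real one would query monitoring."""
    return True

def canary_deploy(hosts, canary_size=1, check=healthy):
    """Deploy to `canary_size` hosts first; abort if any canary fails."""
    canaries, rest = hosts[:canary_size], hosts[canary_size:]
    for host in canaries:
        deploy_to(host)
        if not check(host):
            return canaries, False  # stop: the rest of the fleet is untouched
    for host in rest:
        deploy_to(host)
    return hosts, True

deployed, ok = canary_deploy(fleet)
print(len(deployed), ok)
```

The payoff is in the failure path: a bad release stops after one host instead of taking down all five.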

A lot of teams also use chat-based deployment workflows, using bots to deploy with simple commands, so everyone can easily see deployment activity and learn together.

 

Top tools include: Bamboo, Puppet

 

Monitor the deployment

 

It can be really helpful to have release dashboards and monitors set up that help you visualize high-level release progress and status of requirements. It’s also key to understand whether services are healthy and if there are any anomalies before, during, and after a deploy. Make sure you are notified in real time on key events that take place on your continuous integration servers so you know if there’s a failed build, or know to hold or roll back on a deploy.
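The real-time notifications described above boil down to routing a stream of CI and deploy events into alerts when they match failure conditions. A toy version (the event records are invented):

```python
# Toy event router: turn a stream of CI/deploy events into alerts for the
# conditions that need a human (failed builds, unhealthy deploys).
events = [
    {"type": "build",  "status": "success",   "job": "web-app #101"},
    {"type": "build",  "status": "failed",    "job": "web-app #102"},
    {"type": "deploy", "status": "unhealthy", "job": "web-app #101 canary"},
]

# (type, status) pairs that should page or notify someone.
ALERT_ON = {("build", "failed"), ("deploy", "unhealthy")}

def alerts_for(stream):
    """Return a notification message for each event that matches a rule."""
    return [f"ALERT: {e['type']} {e['status']}: {e['job']}"
            for e in stream if (e["type"], e["status"]) in ALERT_ON]

for message in alerts_for(events):
    print(message)  # in practice this would page someone or post to chat
```

Tools in this space do the same filtering, then add routing, escalation, and deduplication on top.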

 

Top tools include: Datadog, Elastic Stack, PagerDuty

 

Monitor

Server monitoring

Server monitoring gives you an infrastructure-level view. A lot of teams also use log aggregation to drill down into specific issues. This type of monitoring enables you to aggregate metrics (such as memory, CPU, system load averages, etc.) and understand the health of your servers so that you can take action on issues, ideally before applications — and the customers that use them — are affected.
Application performance monitoring
Application performance monitoring provides code-level visibility into the performance and availability of applications and business services, such as your website. This makes it easy to quickly understand performance metrics and meet service SLAs.

Server Monitoring :
Top tools include: Datadog, AWS Cloudwatch, Splunk, Nagios, Pingdom, Solarwinds, Sensu

Application performance monitoring

Top tools include: New Relic, Dynatrace, AppDynamics

Respond and Learn

 

Monitoring tools provide a lot of rich data, but that data isn’t useful if it isn’t routed to the right people who can take the right actions on an issue in real time. To minimize downtime, people must be notified with the right information when issues are detected, have well-defined processes around triage and prioritization, and be enabled to engage in efficient collaboration and resolution.

Application and performance issues can now often cost thousands of dollars a minute, so orchestrating the right response is often highly stressful, but it can't afford to be chaotic. In the middle of a fire, you don't want to waste half an hour pulling up a contact directory and trying to figure out how to get the right people on a conference bridge.

The good news is, PagerDuty automates the end-to-end incident response process to shave time off of resolving both major customer-impacting incidents or daily operational issues. Here at PagerDuty, everyone from our engineering teams, support teams, security teams, executives, and more, uses our product to orchestrate coordinated response to IT and business disruptions. We have the flexibility to manage on-call resources, suppress what’s not actionable, consolidate related context, mobilize the right people and business stakeholders, and collaborate with our preferred tools. If you can easily architect exactly what you want your wartime response to look like, you’ll have a lot more peace of mind.

 

Tools we use: PagerDuty, HipChat, Slack, Conferencing tools

 

Learn and improve

 

When wartime is over, incidents provide a crucial learning opportunity to understand how to improve processes and systems to be more resilient. In accordance with the CAMS pillars of DevOps (Culture, Automation, Measurement, Sharing), it’s important to understand incident response and system performance metrics, and facilitate open dialogue to share successes and failures towards the goal of continuous improvement.
Look for a solution that enables you to streamline post-mortems and post-mortem analysis for the purpose of prioritizing action items regarding what needs to be fixed. You'll want to measure the success of a service relative to business goals and customer experience metrics, with tools that help you understand product usage and customer feedback. All of these will feed into the next sprint so that you can accurately plan and prioritize both system and feature improvements, for an even better product and happier customers.

Tools we use: PagerDuty Postmortems, Looker, Pendo, SurveyMonkey

Important Tools


Chef : https://www.chef.io/

Chef is an extremely popular tool among DevOps engineers, and it’s easy to see why. From IT automation to configuration management, Chef relies on recipes and resources so you can manage unique configurations and feel secure knowing Chef is checking your nodes and bringing them up to date for you.
Key Features:
·       Manage nodes from a single server
·       Cross-platform management for Linux, Windows, Mac OS, and more
·       Integrates with major cloud providers
·       Premium features available
Cost:
·       Essentials: FREE – manage 10,000+ nodes from a single server, cloud integration, access to premium features with up to 25 nodes (hosting up to 5 nodes), and 8×5 support (30 days)
·       Subscription: $6/node/month – all Essentials plan features, plus access to premium features and an account manager
·       Enterprise: Contact for a quote – all Subscription plan features, plus access to premium features, unlimited 24×7 support, success engineering, cookbook build coaching, and access to the Chef product team

 


Docker : https://www.docker.com/

An integrated technology suite enabling DevOps teams to build, ship, and run distributed applications anywhere, Docker is a tool that allows users to quickly assemble apps from components and work collaboratively. This open platform for distributed applications is appropriate for managing containers of an app as a single group and clustering an app’s containers to optimize resources and provide high availability.
Key Features:
·       Package dependencies with your apps in Docker containers to make them portable and predictable during development, testing, and deployment
·       Works with any stack
·       Isolates apps in containers to eliminate conflicts and enhance security
·       Streamline DevOps collaboration to get features and fixes into production more quickly
Cost:
·       Community Edition: FREE
·       Enterprise Edition Basic: Starting at $750/year
·       Enterprise Edition Standard: Starting at $1,500/year
·       Enterprise Edition Advanced: Starting at $2,000/year

Ansible : https://www.ansible.com/


Providing the simplest way to automate IT, Ansible is a DevOps tool for automating your entire application lifecycle. Ansible is designed for collaboration and makes it much easier for DevOps teams to scale automation, manage complex deployments, and speed productivity.
Key Features:
·       Deploy apps
·       Manage systems
·       Avoid complexity
·       Simple IT automation that eliminates repetitive tasks and frees teams to do more strategic work
Cost: Contact for a quote

Puppet Enterprise : https://puppet.com/solutions/devops


Puppet Enterprise is one of the most popular DevOps tools on the market because it enables teams to deliver technology changes quickly, release better software, and do it more frequently with confidence. Use Puppet Enterprise to manage infrastructure as code and get a solid foundation for versioning, automated testing, and continuous delivery.
Key Features:
·       Deploy changes with confidence
·       Recover more quickly from failures
·       Free your team to be more agile and responsive to business needs
·       Increase reliability by decreasing cycle times
·       Ensures consistency across development, test, and production environments so teams know that changes are consistent and systems are stable when you promote them
Cost: FREE trial available
·       12-Month Puppet Enterprise Subscription for up to 500 Nodes: $3,000 – Standard support and maintenance
·       Contact sales for more than 500 nodes or for premium support

Git : https://git-scm.com/

As a version control system (VCS) tool, Git helps developers manage their projects with speed and efficiency. It’s free and open-source, which means anyone can use it. One of its signature features is a branching model that allows developers to create multiple local branches, or pointers to a commit, that are independent of one another. Developers can then merge, create, or delete these branches as their infrastructure evolves.

 

Jenkins : https://jenkins.io/

An extensible continuous integration engine, Jenkins is a top tool for DevOps engineers who want to monitor executions of repeated jobs. With Jenkins, DevOps engineers have an easier time integrating changes to projects and have access to outputs to easily notice when something goes wrong.
Key Features:
·       Permanent links
·       RSS/email/IM integration
·       After-the-fact tagging
·       JUnit/TestNG test reporting
·       Distributed builds
Cost: FREE

 

Reference URL :

Building a more agile, automated organization with DevOps


Top DevOps Tools: 50 Reliable, Secure, and Proven Tools for All Your DevOps Needs


Top DevOps Questions






 

Friday, 22 September 2017

Hydroponic Fodder System



Introduction to Hydroponic Fodder for Goats:- Well, as we all know, feed / fodder / forage is one of the major components of goat farming, and fodder management is a key factor in successful and profitable goat farming. Green fodder plays a major role in the feed of milch animals, providing the nutrients required for milk production and the health of dairy animals. Commercial goat farmers should have enough land to grow green fodder crops such as hybrid grasses and legume crops. When we grow green fodder crops in open fields, labour and other costs are required to produce the forage. For healthy goat rearing, the fodder should have good nutrients as well. In many regions of India, due to climatic conditions or the non-availability of land, succulent grass is not available for goats throughout the year. This condition forces us to think of a different system to produce green fodder for goats throughout the year, irrespective of climatic conditions and land availability. A hydroponic fodder system drastically reduces the land cost and feed cost and provides nutritious fodder throughout the year. Today, we are going to talk about the hydroponic fodder production system, which is different from the conventional fodder production system.

What is Hydroponic Fodder? It is a system in which green fodder or plants are grown in nutrient-rich solutions instead of soil. In this system of growing, plants require sunlight, nutrients, and water. It is very much possible to grow green fodder in a hydroponic system without soil, under a controlled environment (like a small greenhouse), by providing all the required inputs. One can easily build the hydroponic system on their own or buy a hydroponic machine to grow green fodder for goats. In a hydroponic system, corn / maize, oats, barley, wheat grass, rice / paddy saplings, and sorghum can be grown successfully for goats. The fodder grown in this system has grains, roots, stems, and leaves that goats can all use, whereas in conventionally grown fodder, only the stems and leaves are used.

Need for Hydroponic Fodder for Goats:- Most small-scale goat farmers may not have enough land to grow green fodder, and they may not get fodder throughout the year in the conventional system of growing. Some farmers may not be able to bear the irrigation cost required for green fodder production for goat farming. In this situation, goat farmers can utilize this highly efficient and nutrient-rich system of growing green fodder for goats hydroponically. Providing nutritious feed to goats on a regular basis using a hydroponic system will result in a high yield of milk in dairy goats and quick, healthy meat production in meat goats.

Advantages of Hydroponic Fodder for Goats:- If you are planning for commercial goat farming, you must consider this system due to the following advantages and benefits.



Saving water in goat fodder production: It requires just 2 to 3 liters of water to produce 1 kg of green forage/fodder, compared to 60-75 liters in the conventional system of fodder production. Water wastage can be prevented in hydroponics by recycling the water used to grow the fodder.

Less land in goat fodder production: A hydroponic greenhouse requires only marginal land to erect: a 10 meter x 5 meter plot can produce 650 kg of green fodder per day per unit, in comparison to 1 hectare of land for conventional fodder growing.

Less labour in goat fodder production: A hydroponic system requires less labour, say 2 to 3 hours per day, whereas conventional fodder production requires a whole day to harvest the fodder.


Growing Green Fodder in Tray System.
Less growing time for goat fodder: In a hydroponic system, it takes just 1 week (7 days) to get nutritious fodder, from seed germination to a fully grown plant of 30 cm height. The biomass conversion ratio is as high as 8 to 9 times that of traditional fodder, which is grown for 60-75 days.

Growing fodder throughout the year: Irrespective of climatic conditions and other restrictions, fodder for goats can be grown all year round to meet demand.

High nutrients: A hydroponic system facilitates the growth of highly nutritious fodder compared to conventional fodder production. This is very much required for dairy goats and for quick meat production in commercial goat farming.

Natural feed for goats: Hydroponics is a natural system of growing fodder for goats. No pesticides are used in this system, hence the milk and related milk products remain free from contamination.

Losing less fodder: Every part of the fodder (grains, roots, stems, and leaves) is consumed by goats, without losing any of the feed produced from hydroponics.
How to Set up a Hydroponic System for Goats:- There are two options for goat farmers: setting up their own hydroponic system or buying a hydroponic machine.

It is very simple to grow fodder for goats in this system. The process consists only of soaking the seeds in water and nutrient solutions for a few days and leaving them to germinate under a controlled environment (providing sunlight, temperature, and moisture).

Components of a Hydroponic Fodder System:
Hydroponic plastic trays.
Rack or stand to keep the trays.
Water sprinkling setup.
Greenhouse shade cloth.
Setting up a hydroponic fodder system for goats: Select the site / land where you want to set up the hydroponic system based on the quantity of fodder required. After selecting the land, it's time to give it a greenhouse shape; greenhouse shade cloth is needed for fencing the area. The setup requires a rack to keep the fodder trays and automated water sprinklers, which can be used to sprinkle water on the trays frequently and maintain the temperature for faster germination of the fodder seeds. Use wooden or bamboo racks to minimize the cost.

Hydroponic Fodder – Steps involved:
Day Wise Growth – Hydroponic Fodder.

Step #1 – Day 1: Select quality seeds/grains for hydroponic green fodder. You can select maize, wheat, rice, oats, barley, or corn. Take 1 kg of grains / seeds, divide it into 2 parts (half a kg of seeds per tray), and clean out any dead or broken grains. The grains should be washed with a solution of sodium hypochlorite and left in the solution for half an hour (30 minutes). After draining the grains, soak them in fresh water for a whole day (24 hours).
Step #2 – Day 2: The soaked seeds should be drained properly and left for 5 hours in the open air before being placed in the tray; then keep the tray in the rack for 2 days (48 hours) to allow the germination process.
Step #3 – Day 3 onwards: After seed germination, frequent water sprinkling is required for the growth of the fodder, and proper sunlight/temperature should be maintained. You can leave the sprouted seeds in the tray for 5 to 6 days with regular and frequent water sprinkling. Once the fodder reaches a certain height, you can take it directly from the tray and feed the entire plant to your goats. Don't forget to take the fodder with the roots. Initially, your goats may not like the taste, so combine it with other fodder until the goats are habituated to hydroponic fodder.

Hydroponic Fodder – Nutrient comparison of goat fodder:- The following table compares the nutrients of conventionally grown fodder with fodder produced in hydroponics.

Nutrient               | Conventional Green Fodder (Maize / Corn) | Hydroponic Green Fodder (Maize / Corn)
Protein                | 10.69                                    | 13.59
Ether Extract          | 2.28                                     | 3.53
Crude Fibre            | 25.97                                    | 14.14
Nitrogen Free Extract  | 51.79                                    | 66.78
Total Ash              | 9.39                                     | 3.89
Acid Insoluble Ash     | 1.42                                     | 0.35

Hydroponically Grown Grass.
Bottom Line:- Hydroponically grown grass or fodder is an excellent source of nutrients and minerals, which can serve as the best feed for more milk in dairy goats and healthy weight gain in commercial meat goat farming.