Thursday 30 December 2021

POSTMAN PRE-REQUEST SCRIPT


Postman provides many features for grouping, testing, and running APIs. Many of these features come in handy while testing or developing REST APIs.

The Pre-request Script feature is one of the core features; it can be used to fetch a token and set it for subsequent requests. We can store the token in an environment variable and reference it in the headers of any request in a collection.

Code snippets for the common cases are below.

CASE 1: When the request body is of raw type.

pm.sendRequest({
    url: 'http://localhost:8080/authenticate',
    method: 'POST',
    header: 'Content-Type: application/json',
    body: {
        mode: 'raw',
        raw: JSON.stringify({
            "username": "admin",
            "password": "password"
        })
    }
}, function (err, res) {
    if (err) {
        console.log(err);
        return;
    }
    console.log(res.json());
    // Store the token so other requests can reference it as {{authorization}}.
    pm.environment.set("authorization", "Bearer " + res.json().token);
});


Now, in any request, we can set the header by adding the key "Authorization" with the value {{authorization}}.
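Alternatively, instead of adding the header manually to each request, a pre-request script on the request itself can inject it. A minimal sketch using Postman's pm.request.headers API:

// Sketch: inject the stored token into the outgoing request's headers.
// Assumes the "authorization" environment variable was set as above.
pm.request.headers.upsert({
    key: 'Authorization',
    value: pm.environment.get('authorization')
});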


CASE 2: When the request body is of formdata type.


const reqObj = {
    'method': 'POST',
    'url': 'http://localhost:8080/authenticate/token',
    'body': {
        'mode': 'formdata',
        'formdata': [
            {'key': 'grant_type', 'value': 'password'},
            {'key': 'client_id', 'value': '2'},
            {'key': 'scope', 'value': '*'},
            {'key': 'client_secret', 'value': 'Tjkjkjk'},
            {'key': 'username', 'value': 'anoop'},
            {'key': 'password', 'value': 'password'}
        ]
    }
};
pm.sendRequest(reqObj, (error, response) => {
    if (error) throw new Error(error);
    console.log(response.json().access_token);
    // Store the token so other requests can reference it as {{TOKEN}}.
    pm.environment.set("TOKEN", response.json().access_token);
});

Similar to the request above, here too we can set the value dynamically with {{TOKEN}}.
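For example, the header entry in a request would look like "Authorization: Bearer {{TOKEN}}", with Postman substituting {{TOKEN}} from the environment at send time.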


CASE 3: We can also add the code below under the Tests tab to set a collection variable X-Auth-Token, which is useful for all the requests in the collection.


var res = pm.response.json();
pm.environment.set('X-Auth-Token', res.access.token.id);
pm.collectionVariables.set('X-Auth-Token',res.access.token.id);

console.log(res.access.token.id);
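Any request in the collection can then send the token by adding a header X-Auth-Token with the value {{X-Auth-Token}}; collection variables resolve with the same {{...}} syntax as environment variables.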




Monday 27 December 2021

Kubernetes Commands

1) minikube start --vm-driver=hyperkit : will start a one-node cluster on a virtual machine.

2) minikube status : to check the status of the cluster.

Note: Minikube only handles cluster lifecycle tasks such as starting and deleting the cluster, whereas kubectl is the client used to interact with the cluster and perform tasks such as creating deployments.

3) kubectl get pods : to list the created pods.

4) kubectl get services : to list all the created services.

5) kubectl get nodes : to list all the nodes in the cluster.

6) kubectl create deployment deployment1 --image=nginx : to create a deployment for the nginx image, which will be pulled from Docker Hub.

7) kubectl get deployment : will list the deployment details.

8) kubectl get replicaset : will list how many replica sets are created for each deployment.

9) kubectl edit deployment deployment1 : to edit the deployment configuration in place.

10) kubectl logs pod-name : to view the logs of a pod.

11) kubectl describe pod pod-name : to show detailed information about a pod, such as events and container state.
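Putting a few of these commands together, a minimal end-to-end sketch looks like this (the deployment name deployment1 and pod-name are illustrative):

kubectl create deployment deployment1 --image=nginx
kubectl scale deployment deployment1 --replicas=2
kubectl expose deployment deployment1 --port=80 --type=NodePort
kubectl get pods
kubectl logs pod-name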

Thursday 16 December 2021

Containerization Terminologies

Kubernetes

Kubernetes is a container orchestration framework built by Google.

k8s

Abbreviation for Kubernetes ("ubernete" is 8 letters).

Openshift

OpenShift is an application Platform-as-a-Service built by Red Hat to extend Kubernetes.

Origin

OpenShift Origin is the open source base for OpenShift installations. Red Hat sells both a cloud (OpenShift Online) and an on-prem (OpenShift Container Platform) offering. Because Origin is open source, it can be installed and managed freely.

Object

In this context, an object refers to a k8s object. The definition of an OpenShift object is the same; OpenShift extends the k8s API with additional objects.

S2I (Source-to-Image)

S2I is a way of building Docker images. Though built for the OpenShift project, it can be run fully independently, though naturally OpenShift has special support for it. The chief benefit of S2I is not having to define a Dockerfile for each application, which implicitly means that there is a standard build methodology per programming language.

There is a default set of S2I base images maintained in the Software Collections GitHub organization.
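As a sketch of the workflow, building and running an image with the s2i CLI could look like the following (the source repository, builder image, and output image names are illustrative):

s2i build https://github.com/sclorg/django-ex centos/python-35-centos7 my-django-app
docker run -p 8080:8080 my-django-app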

Helm

Helm is a CLI tool for working with k8s or OpenShift objects (usually YAML files). It is extensively documented.

Tiller

Tiller is a gRPC server component of the Helm project. The Helm CLI sends requests to Tiller for almost all of its operations, such as running an installation.

Chart

A Chart is a "package" in the Helm world. It contains one or more templated k8s objects. Charts are versioned and published to a Chart Repository.

Chart Repository

Generally, a Chart Repository is a static file server with tarballs of Charts. GitHub Pages can serve as a chart repository, and any charts committed to master in the GitHub repo can be published automatically.
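A minimal sketch of consuming such a repository with the Helm 2 CLI described above (the repository URL, chart, and release names are illustrative):

helm repo add myrepo https://example.github.io/charts
helm repo update
helm install myrepo/mychart --name myrelease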

Motor

Motor is a RESTful interface for Tiller. In addition to translating REST calls to gRPC, Motor also handles RBAC for installs and release data.

OpenShift CLI Commands

1) oc login clusterURL : used to connect to the cluster.

2) oc project projectName : used to select the project to work in.

3) oc get pods : to get pod details.

4) oc get dc : to get deployment config details.

5) oc get rc : to get replication controller details.

6) oc get services : to get service details.

7) oc get routes : to get route details.

8) oc scale --replicas=2 dc/name : to scale to 2 replicas; this ensures that there are always 2 instances running, with auto-healing, meaning that if any instance goes down, a replacement is brought up automatically.

9) oc policy : to grant access to different users and groups.

10) oc rsh podname : to open a remote session into a pod for debugging and inspecting. Though a pod or container is immutable, we can still open a remote session to check whether anything is wrong inside the pod.
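Putting a few of these together, a typical debugging session might look like this (the cluster URL, project, and pod names are illustrative):

oc login https://cluster.example.com:8443
oc project myproject
oc get pods
oc logs podname
oc rsh podname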



Monday 6 December 2021

AWS S3 (Simple Storage Service)

What is S3? 

S3 stands for Simple Storage Service, and essentially it is object storage in the cloud. It provides secure, durable, and highly scalable object storage, and it allows you to store and retrieve any amount of data from anywhere on the web at a very low cost. The other cool thing about S3 is that it is really, really simple to use.

S3 is object-based storage: it manages data as objects rather than in file systems or data blocks. So you can upload any file type that you can think of to S3, like image files, text files, videos, web pages, etc. One important point is that you can only store static content on S3, which means you cannot host dynamic workloads on it, like installing an OS.

The total volume of data and the number of objects you can store in S3 are unlimited, and the maximum object size is 5 TB as of today.

Where do we store our files in S3?

We store our files in a thing called a bucket. An S3 bucket is basically a folder inside S3. An important thing to know is that S3 bucket names live in a universal namespace shared by all AWS accounts, which means each bucket name must be globally unique at creation time.

What happens when a bucket is created?

Let's say you created a bucket with the name s3bucket; you will see that a URL like the one below gets generated:

https://s3bucket.s3.us-east-1.amazonaws.com/file1.txt

So the syntax is:

https://{name of bucket}.s3.{region the bucket belongs to}.amazonaws.com/{key name, i.e. the object that gets persisted, like a file}
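As a sketch, assuming the AWS CLI is configured, creating that bucket and uploading a file would look like:

aws s3 mb s3://s3bucket --region us-east-1
aws s3 cp file1.txt s3://s3bucket/file1.txt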

We can also enable versioning on an S3 bucket, which can be a good option for daily monitoring and for tracking changes to objects.
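For example, versioning can be switched on per bucket with the AWS CLI:

aws s3api put-bucket-versioning --bucket s3bucket --versioning-configuration Status=Enabled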

Some of the attributes associated with an S3 object are:

      i) Key

      ii) Value

      iii) Version ID

      iv) Metadata

When a bucket is created, it is private by default; in case you want to make objects public, the bucket itself needs to be made public first.


General Response Codes

When you upload a file to S3 and the upload is successful, your browser receives an HTTP 200 code. There are many other response codes, which you can discover in the AWS docs; I will also try to add those later.

How safe and durable is data stored in S3?

So just remember that S3 is a safe place to store your files. The data is always spread across multiple devices and facilities, and can span multiple Availability Zones, so it is never just in a single server in a single data center. All of this is done to ensure availability and durability.


How can data be secured in S3?

Data can be secured in 3 ways:

1) Enabling server-side encryption: whenever new objects get created, all of them are encrypted by default.

2) Enabling Access Control Lists (ACLs): specify which accounts and groups are allowed to access specific objects in a bucket.

3) Enabling bucket policies: define which operations are allowed on a bucket, e.g. PUT is allowed but DELETE is not (see the sketch after this list).
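As a sketch of point 3, a policy that denies deletes on the (illustrative) s3bucket could be applied like this with the AWS CLI:

aws s3api put-bucket-policy --bucket s3bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyObjectDeletion",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::s3bucket/*"
  }]
}'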


Difference between ACLs and Bucket policies:

ACLs are applied at the individual object level inside a bucket, whereas bucket policies are applied at the bucket level.

 

Data consistency model in S3? 

S3 supports a strong read-after-write consistency model, in which a read after every update immediately shows all the new changes.

It also provides strong consistency for list operations, so a listing reflects all the latest writes.

 

Thursday 29 July 2021

Hack Jenkins credentials using Script Console.

Jenkins is one of the most popular CI/CD tools across the globe, but there is still a way to figure out the credentials stored in it.

It has a lot of incredible features that make life easier. One of the most powerful is the Script Console: Jenkins offers a console that can execute Groovy scripts to do anything within the Jenkins master runtime, or in the runtime on agents.

This console can be used for configuring Jenkins and debugging runtime issues, but misusing it, or leaving it unsecured, can cause a lot of harm. You may lose the server or even get your infrastructure hacked.

Access Script Console:

Go to "Manage Jenkins", then click "Script Console". Or you can simply go to the URL "Jenkins_URL/script".

Type any Groovy code in the box, then click Run. It will be executed on the server.

Examples of what can be done:

You can disable all jobs, delete the workspace of all disabled jobs to save space, decrypt credentials configured within Jenkins, and decrypt any password stored in Jenkins, for instance a user password, if you have its encrypted value.

To disable all jobs:

To disable all jobs in your Jenkins at once, go to the magic console and execute the next piece of code.

import hudson.model.*

disableChildren(Hudson.instance.items)

def disableChildren(items) {
    for (item in items) {
        if (item.class.canonicalName == 'com.cloudbees.hudson.plugins.folder.Folder') {
            // Recurse into folders to reach the jobs inside them.
            disableChildren(((com.cloudbees.hudson.plugins.folder.Folder) item).getItems())
        } else {
            item.disabled = true
            item.save()
            println(item.name)
        }
    }
}

You can even delete their workspace after they are disabled:

// michaeldkfowler
import jenkins.model.*

Jenkins.instance.getAllItems(AbstractProject.class)
    .findAll { it.disabled }
    .each {
        println("Wiping workspace for " + it.fullName)
        it.doDoWipeOutWorkspace()
    }

Decrypt credentials defined in Jenkins and list values.

Credentials are very critical, and it's important to save them somewhere no one can get to them easily. Saving them in Jenkins is not the best way to do so.

The next piece of code can easily print out all credentials stored in the Jenkins server, first those of type Private Key and then those of type Username and Password, with their VALUES!

def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
    com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class,
    Jenkins.instance,
    null,
    null
);
for (c in creds) {
    println((c.properties.privateKeySource ? "ID: " + c.id + ", UserName: " + c.username + ", Private Key: " + c.getPrivateKey() : ""))
}
for (c in creds) {
    println((c.properties.password ? "ID: " + c.id + ", UserName: " + c.username + ", Password: " + c.password : ""))
}

Decrypt the password if you got its HASH.

Account passwords and the proxy user password are stored encrypted somewhere on the server. Assuming you got that encrypted value (the "hash"), you can decrypt it using the next code.

println(hudson.util.Secret.decrypt("{HASHxxxxx=}"))
// or
println(hudson.util.Secret.fromString("{HASHxxxxx=}").getPlainText())

Of course, getting this value sounds difficult, but here is an example of how easy it might be.

On a server where a proxy is defined, a username/password is saved for using that proxy. If you inspect the password field, you will get the encrypted value of the password straight from the HTML!

Jenkins saves the password's value and returns the encrypted form to the browser to use later.

Running shell commands:

As a debugging tool, the console gives you the ability to run any shell command, e.g. to execute the command "ls" on the server:

println new ProcessBuilder('sh','-c','ls').redirectErrorStream(true).start().text

You can run any shell command or script on the server by creating a job that executes your commands, but the console can be used as a hidden place to do whatever you want without letting others know about it.



Mac shortcuts for daily use.

Some of the basic shortcuts I came across while working with a Mac.


1) Terminal:

  • Copy and paste:
command + c || command + v
  • Break a process:
control + c
  • Go to the beginning of a line inside the terminal:
fn + shift + left arrow
  • Go to the end of a line inside the terminal:
fn + shift + right arrow
  • New window:
command + N
  • New tab:
command + T
  • Close a tab:
command + W
  • Go to tab number 1 || 2 .. etc:
command 1 || command 2 ... etc
  • Search in history commands:
control + R
  • Exit current shell:
control + D
  • Delete text where the cursor is:
fn + delete

2) Chrome:

  • Open the Chrome Developer Tools:
command + option + I || command + option + J || command + Shift + C.
  • New incognito window:
shift + command + N
  • Open last closed tab:
command + shift + T
  • Open downloads:
option + command + L

3) General:

  • Navigate through more than one window for the same application:
command + `
  • Open new tab:
command + tab
  • Close current tab:
command + w
  • Home button (go to the beginning of a page):
fn + left arrow
  • End button (go to the end of a page)
fn + right arrow
  • Go to the beginning of a line:
command + left arrow
  • Go to the end of a line:
command + right arrow
  • Get next opened tab:
control + tab
  • Get the previously opened tab:
control + shift + tab
  • Open new tab:
command + t
  • Delete text in front of the cursor:
fn + delete

Wednesday 31 March 2021

Daily helper in Linux or Mac

1) Search recursively for some content inside a directory:
   grep 'content_to_find' -R /home/anoop

2) Grep some content from a source; -A sets the number of lines to show after the matched word:
    cat nginx.conf | grep -A 10 word_to_search

3) Check whether a port is in the LISTEN state:
    netstat -an | grep 443

4) The find command is used to find a file or directory inside any directory.
      Syntax: find location -iname dir_name_or_file_name
      Example: find / -iname "search.log"

5) Add the /opt folder to Finder on Mac: press Command+Shift+G, go to the folder, and add it to the Favourites section by dragging it.

Homebrew, the de facto package manager on Mac, has a services subsystem that can be used to check and change the status of different services:

1) brew services : list all services.
2) brew services info mongodb-community : show details for a service.
3) brew services stop mongodb-community : stop a service.
4) brew services start mongodb-community : start a service.