Friday, May 19, 2017

Extracting a git subdirectory to a new repository

From time to time I need to do this, as git repositories are rarely left untouched during their lifetime. It appears to be quite a simple exercise, but I keep forgetting the command to use. So, if I want to extract a directory, let's say 'core', from the current git repository, I have to run the following:
git filter-branch --prune-empty --subdirectory-filter <directory_name>
For example:
git filter-branch --prune-empty --subdirectory-filter core
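Note that filter-branch rewrites the current repository's history in place, so it is safer to run it on a fresh clone and push the result to a new remote. A minimal sketch, with all paths, names, and URLs illustrative:

```shell
# Demo setup: a throwaway repository with a 'core' subdirectory
# (all paths here are illustrative).
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1   # silence the deprecation notice on newer git
tmp=$(mktemp -d)
git init -q "$tmp/original"
cd "$tmp/original"
git config user.email you@example.com
git config user.name you
mkdir core docs
echo lib    > core/lib.txt
echo readme > docs/readme.txt
git add .
git commit -qm "initial commit"

# Filter a fresh clone so the original history stays untouched.
git clone -q "$tmp/original" "$tmp/core-repo"
cd "$tmp/core-repo"
git filter-branch --prune-empty --subdirectory-filter core

# The clone's root now holds only what used to live under core/;
# point it at a new empty remote and push to finish the extraction:
# git remote set-url origin git@example.com:me/core.git
# git push -u origin --all
ls
```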

Wednesday, February 15, 2017

Persistent Storage with Kubernetes Cluster

Kubernetes is an extremely useful tool when it comes to deploying complex microservice-based solutions. It simplifies the process, from providing out-of-the-box resilience to the ability to scale services horizontally in the blink of an eye. However, at some point you might ask yourself how to persist the data used by the containers deployed to Kubernetes.
With standalone Docker there are a couple of strategies that could be used. Docker can map Docker-managed volumes or a host directory into the running container, so if the container is destroyed the data is kept in those objects and can be mapped to a new container should it be re-created.
In theory the host directory could also be used as persistent storage in a Kubernetes ecosystem, but then something would have to sync the data between the directories on each node. This is just a theory and still has to be confirmed.
Another approach is to use network storage such as NFS or iSCSI to share the data. This is a step-by-step guide to setting up and using iSCSI LUNs with Kubernetes.
First, ensure that the iscsi-initiator-utils package is installed on each node. In order to access the iSCSI drives, the following prerequisites must be met:
  1. The nodes are connected to the iSCSI portal, preferably via a separate network interface.
  2. The iSCSI target portal is configured to allow multiple connections, and some form of security is in place, i.e. it authorises connections only from a range of IPs, or CHAP authentication is configured.
  3. One or more LUNs are created on the target iSCSI portal.
Configure the iSCSI initiator on each node of the cluster. Then run the following command (on each node) to discover the accessible LUNs:
  sudo iscsiadm -m discovery -t sendtargets -p <portal IP>
This will create the iSCSI target records locally. Next, log into the iSCSI portal by running the following command:
  sudo iscsiadm -m node --login
Or, to log into a specific target:
  sudo iscsiadm -m node -T <target IQN> -p <portal IP> --login
Then create a configuration for the persistent volume (for example in the file storage/mysql.yaml):
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name>
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  capacity:
    storage: <capacity>
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  iscsi:
    targetPortal: <access IP address>
    iqn: <target name>
    lun: 0
    fsType: ext4
    readOnly: false
Create it in Kubernetes:
  kubectl create -f storage/mysql.yaml
That is kind of it: the persistent storage is now available to Kubernetes containers. In order to use it, create a persistent volume claim, for example using the following configuration file (storage/mysql.yaml):
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <name>
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <capacity>
Then create it:
  kubectl create -f storage/mysql.yaml
Bear in mind that the persistent volume capacity should match the persistent volume claim capacity (at least for now). In order to use it (for example with a MySQL container), create the following file (deployment/mysql.yaml):
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
        environment: production
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql
Create a deployment by running:
  kubectl create -f deployment/mysql.yaml
Your persistent volume is now attached to the pod. If you kill the pod, the replica set associated with the deployment will create a new pod, which will be attached to the same persistent volume.
The downside of this strategy is that it only allows one pod to be connected with read/write permission (when connected to the iSCSI persistent volume). This means that the replica set can only have one pod, i.e. it is not horizontally scalable; however, Kubernetes will still make the deployment resilient, as in case of failure the pod will be replaced automatically.
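As an aside, the Kubernetes documentation lists iSCSI volumes as also supporting the ReadOnlyMany access mode, so data that only needs to be read can still be shared across many pods. A hypothetical PV fragment (all names and addresses are illustrative):

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data                       # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany                        # many pods, read-only
  iscsi:
    targetPortal: 10.0.0.10:3260          # illustrative portal address
    iqn: iqn.2017-02.com.example:lun1     # illustrative IQN
    lun: 0
    fsType: ext4
    readOnly: true
```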
Alternatively, you can attach the LUN directly to the pod. Here is an example configuration (pods/mysql.yaml):
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: mysql-data
  volumes:
  - name: mysql-data
    iscsi:
      targetPortal: <access IP address>
      iqn: <target name>
      lun: 0
      fsType: ext4
      readOnly: false
In this case the LUN is attached directly to the pod, without using the claims mechanism.

Saturday, January 21, 2017

Analysis of the iOS crash report

Today I was investigating a series of nasty crashes in an iOS application. The problem was that the application crashed on the device and Xcode could not fully symbolicate the crash report, hence the only thing I could see there was a bunch of addresses inside the application code. Something like this:
12  CoreData                   0x195164cf8 -[NSManagedObjectContext save:] + 544
13  XXX                    0x100098c5c 0x100048000 + 330844
14  XXX                    0x10009b4c4 0x100048000 + 341188
15  XXX                    0x100072b24 0x100048000 + 174884
16  XXX                    0x1000710d8 0x100048000 + 168152
17  XXX                    0x100071810 0x100048000 + 170000
After spending half an hour digging through the internet via Google, I came across this discussion on StackOverflow, which gave great advice on how to decode those addresses.
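For the record, the key is the third column in such frames: it is the binary's load address, and the number after the plus sign is simply the runtime address minus it. Given the app's dSYM, atos can map the addresses back to symbols (the dSYM path and the arm64 architecture below are assumptions):

```shell
# The offset shown after '+' is the runtime address minus the load address:
printf '%d\n' $(( 0x100098c5c - 0x100048000 ))   # 330844, as in frame 13

# With the matching dSYM (macOS only, paths illustrative), atos resolves
# the runtime addresses back to symbol names:
# atos -arch arm64 -o XXX.app.dSYM/Contents/Resources/DWARF/XXX \
#     -l 0x100048000 0x100098c5c 0x10009b4c4 0x100072b24
```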

Wednesday, September 21, 2016

Apache Http client gets stuck


The Apache libraries are extremely popular in the development community. However, in my experience they are frequently misused due to the lack of proper documentation and meaningful examples.

The examples provided usually cover only the basic use cases and, more importantly, rarely give enough information about those aspects of using such a library that could have dire consequences.

I came across one such "hidden secret" a couple of weeks ago. Our customer used the HttpClient class in an Android application to check whether the server was alive. The idea behind the code was that it sends a GET request to a non-existent server resource; if the server responds with a 404 error, it is alive.
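The post breaks off before showing the check itself, so here is a minimal sketch of the described heartbeat, written with the JDK's HttpURLConnection instead of the Apache client purely to keep it self-contained (the URL and timeout values are made up). Whatever client is used, the crucial detail is setting explicit timeouts, since a hung socket otherwise blocks the check indefinitely:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class Heartbeat {
    /** The server counts as alive if it answers at all, even with a 404. */
    static boolean isAlive(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            // Without explicit timeouts a hung socket would block here forever.
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            int code = conn.getResponseCode();   // 404 is returned, not thrown
            conn.disconnect();
            return code > 0;
        } catch (Exception e) {
            return false;                        // timeout, refused, DNS failure...
        }
    }

    public static void main(String[] args) throws Exception {
        // A tiny local stand-in for the real server that 404s every request.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            exchange.sendResponseHeaders(404, -1);
            exchange.close();
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort()
                + "/no-such-resource";
        System.out.println("alive: " + isAlive(url));
        server.stop(0);
    }
}
```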

Tuesday, September 20, 2016

Danger of misunderstanding

A few weeks ago I was looking into a problem that one of our customers encountered, where an Android application incorrectly used an Android Timer to run a heartbeat check, i.e. to check whether the server side of the application was accessible and alive.

In essence, what happened is that the application started generating a large number of requests to the heartbeat resource after a very specific network outage.

The issue surfaced only under a very specific set of circumstances: when a network request simply hangs. This is exactly the kind of scenario in which a lot of troubles with an application's design come to the surface. I managed to replicate the issue by fiddling with the router and blocking the network traffic through the firewall in such a way that the initial handshake succeeds: the client sends the request, does not time out, but never receives the response, so the socket is kept open while no data is received from the server.

The problem that we eventually uncovered was that the timer job was started with the Timer.scheduleAtFixedRate method.

I can only assume that the person who originally used this method misunderstood its meaning: I believe the idea was that the method guarantees the task fires at the specified interval. The problem, however, is that this method tries to catch up on all the executions that were missed if one of them took too long. If one execution of the timer's task took N minutes (potentially because the socket thread was hanging and no timeout occurred) and the original interval was one minute, then as soon as the timer is back to normal, i.e. the hanging thread returns, it will fire all the missed executions at once, literally providing a great tool for mounting a DDoS attack against the server side.

So, remember to use the Timer.schedule method in such circumstances instead.
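To make the difference concrete, here is a small self-contained simulation (the millisecond values are arbitrary). The first execution overruns the period; scheduleAtFixedRate then fires all the missed executions in a burst, which is exactly the request flood described above, whereas Timer.schedule would simply continue at the normal period:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedRateCatchUp {
    /** Runs a fixed-rate timer whose first execution overruns the period
     *  and returns how many times the task fired within the window. */
    static int countExecutions(long periodMs, long firstRunMs, long windowMs)
            throws InterruptedException {
        AtomicInteger runs = new AtomicInteger();
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            boolean first = true;
            @Override public void run() {
                runs.incrementAndGet();
                if (first) {                     // simulate one hanging call
                    first = false;
                    try { Thread.sleep(firstRunMs); } catch (InterruptedException ignored) {}
                }
            }
        }, 0, periodMs);
        Thread.sleep(windowMs);
        timer.cancel();
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With a 100 ms period and the first run blocked for 500 ms, the
        // missed executions are fired back-to-back once the task returns.
        System.out.println("executions: " + countExecutions(100, 500, 700));
    }
}
```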


Sunday, March 27, 2016

A skeptic's guide to the use of the Spring platform

Over the past number of years I have been known for being skeptical about the overuse of frameworks and third-party libraries in Java development. I have written before about my views on Hibernate in particular and ORM in general. On the other hand, I previously had quite a hard time fixing projects that were using the Spring framework, hence I could not call myself a huge fan of Spring either.

I am not going to go deep into why I believe using it everywhere could be wrong, or what Spring could really bring to the average Java developer or architect to make their job better: this post is not about that.

What happened is that recently I decided to go back to it, to freshen up my knowledge of the Spring framework in the first place, and also to see what has changed since I used it last. It is a well-known fact that knowing a certain set of libraries and frameworks is a must when it comes to finding a job in the Java world.

So, I am going to post observations along the way, so that at least I won't need to go through all the misery of resolving the various issues that will arise (I am sure there will be plenty of them) again. So sit tight...

Tuesday, November 3, 2015

Create a bootable El Capitan image

There are numerous resources explaining how to create a bootable ISO image to install El Capitan on VirtualBox. However, it appears that they are missing one step, which for me resulted in an error during the installation. Here is the full list of commands. First, download the El Capitan installation app and do the following:

hdiutil attach /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app
hdiutil create -o /tmp/ElCapitan -size 8192m -layout SPUD -fs HFS+J -type SPARSE
hdiutil attach /tmp/ElCapitan.sparseimage -noverify -nobrowse -mountpoint /Volumes/install_build
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/
cp /Volumes/install_app/BaseSystem.* /Volumes/OS\ X\ Base\ System/
hdiutil detach /Volumes/OS\ X\ Base\ System/
hdiutil detach /Volumes/install_app/
hdiutil resize -size `hdiutil resize -limits /tmp/ElCapitan.sparseimage | tail -n 1 | awk '{ print $1 }'`b /tmp/ElCapitan.sparseimage
hdiutil convert /tmp/ElCapitan.sparseimage -format UDTO -o /tmp/ElCapitan
mv /tmp/ElCapitan.cdr ~/Downloads/ElCapitan.iso
rm /tmp/ElCapitan.sparseimage