Wednesday, January 12, 2011

Configuring passwordless ssh access

1. Execute the following commands on the local machine.

    $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    This creates the key pair id_dsa and id_dsa.pub; the second command also authorizes the key for logins to the local machine itself.

2. Copy id_dsa.pub to the remote machine. Execute the following command to send the key to the remote server.

    $ scp ~/.ssh/id_dsa.pub remote-machine:~/.ssh/

3. Login to the remote machine and execute the following command to authorize the key.

    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
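
If key authentication still fails after these steps, the usual culprit is file permissions: sshd silently ignores keys when ~/.ssh or authorized_keys is group- or world-writable. A minimal sketch of the expected modes, demonstrated against a scratch directory standing in for the real ~/.ssh:

```shell
# sshd expects ~/.ssh to be mode 700 and authorized_keys to be mode 600;
# a scratch directory stands in for the real ~/.ssh here.
d=$(mktemp -d)
mkdir -p "$d/.ssh"
touch "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"
ls -ld "$d/.ssh" "$d/.ssh/authorized_keys"
```

On the remote machine the same chmod commands apply to the real ~/.ssh and ~/.ssh/authorized_keys. Many systems also ship ssh-copy-id, which copies and authorizes the key in one step.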

Sunday, May 02, 2010

Compile vpnc with openssl in Ubuntu

1. Compile vpnc with OpenSSL and build the package
  • Download vpnc source
sudo apt-get source vpnc

  • Uncomment some lines in the Makefile. One of the following two sets of lines will be present; uncomment whichever one you find.
OPENSSL_GPL_VIOLATION = -DOPENSSL_GPL_VIOLATION
OPENSSLLIBS = -lcrypto
or
OPENSSL_GPL_VIOLATION=yes

  • Build vpnc dependencies
sudo apt-get build-dep vpnc

  • Install openssl
sudo apt-get install openssl

  • Install libssl dev packages
sudo apt-get install libssl-dev

  • Build vpnc package
sudo dpkg-buildpackage

2. Create a vpnc conf file
#Generated by pcf2vpnc
IPSec ID GeneralHybrid
IPSec gateway <server>
IPSec secret <secret>
Xauth username <username>
IKE Authmode hybrid
CA-File <path_to_certificate_file>.pem

#Xauth password 123456
#IKE DH Group dh2
#To add your username and password, use the following lines:
#Xauth password <your password>

3. Install network manager vpnc with openssl support
TBD

Wednesday, February 17, 2010

Creating a custom boot entry in Grub2 in Ubuntu 9.10

I created a custom boot entry for RHEL 4.5 in GRUB 2 on Ubuntu 9.10. Here are the steps.

1. Open the "/etc/grub.d/40_custom" file, add the following contents, and save the file.

#!/bin/sh

cat << EOF
menuentry "Redhat Enterprise Linux 4.5" {
    linux (hd0,1)/vmlinuz-2.6.9-42.ELsmp root=LABEL=/1 ro rhgb quiet pci=nommconf
    initrd (hd0,1)/initrd-2.6.9-42.ELsmp.img
}
EOF

  •    You may have to pass different parameters to the kernel. Also, in GRUB 2 partition numbering starts from (hd0,1), whereas in GRUB legacy it starts from (hd0,0).
  •    Make sure that in the kernel parameter root=LABEL=/<something>, <something> is the label of the root partition for the given OS.

2. Run the following command to generate the grub conf file

    $ sudo grub-mkconfig -o /boot/grub/grub.cfg

3. Restart the machine and you are done!
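
Since grub-mkconfig simply executes every script in /etc/grub.d and concatenates their standard output into grub.cfg, you can sanity-check the snippet before regenerating the config by running the script by hand and confirming it prints the menu entry (sketched here against a copy in /tmp):

```shell
# grub-mkconfig runs each /etc/grub.d script and pastes its stdout into
# grub.cfg, so running the custom script should print the menu entry.
cat > /tmp/40_custom << 'OUTER'
#!/bin/sh
cat << EOF
menuentry "Redhat Enterprise Linux 4.5" {
    linux (hd0,1)/vmlinuz-2.6.9-42.ELsmp root=LABEL=/1 ro rhgb quiet pci=nommconf
    initrd (hd0,1)/initrd-2.6.9-42.ELsmp.img
}
EOF
OUTER
sh /tmp/40_custom
```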

Tuesday, February 09, 2010

Configure mod_jk with Apache 2.2 in Ubuntu

1. Install mod_jk: To install mod_jk in Ubuntu, execute the following command on the command line.

sudo apt-get install libapache2-mod-jk

2. Enable mod_jk loading: Create a symbolic link /etc/apache2/mods-enabled/jk.load pointing to /etc/apache2/mods-available/jk.load. This enables loading of the mod_jk module when Apache is restarted.
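
This step can be done by hand with ln -s, or with Apache's a2enmod helper, which performs the same symlinking. The mechanism, sketched against scratch directories standing in for /etc/apache2:

```shell
# Scratch directories stand in for /etc/apache2; on the real system the
# equivalent is:
#   sudo ln -s /etc/apache2/mods-available/jk.load /etc/apache2/mods-enabled/jk.load
# or simply: sudo a2enmod jk
root=$(mktemp -d)
mkdir -p "$root/mods-available" "$root/mods-enabled"
touch "$root/mods-available/jk.load"
ln -s "$root/mods-available/jk.load" "$root/mods-enabled/jk.load"
readlink "$root/mods-enabled/jk.load"
```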

3. Create mod_jk conf file:
Create a mod_jk conf file and place it in /etc/apache2/mods-available/jk.conf

# Where to find workers.properties
# Update this path to match your conf directory location
JkWorkersFile /etc/apache2/jk_workers.properties

# Where to put jk logs
# Update this path to match your logs directory location
JkLogFile /var/log/apache2/mod_jk.log

# Set the jk log level [debug/error/info]
JkLogLevel info

# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"

# Shm log file
JkShmFile /var/log/apache2/jk-runtime-status

4. Enable the mod_jk configuration: Create a symbolic link /etc/apache2/mods-enabled/jk.conf pointing to /etc/apache2/mods-available/jk.conf. This enables the mod_jk configuration when Apache is restarted.

5. Create a worker properties file: Create a workers properties file and place it in /etc/apache2/jk_workers.properties

# Define 1 real worker named ajp13
worker.list=ajp13

# Set properties for worker named ajp13 to use ajp13 protocol,
# and run on port 8009
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.lbfactor=50
worker.ajp13.cachesize=10
worker.ajp13.cache_timeout=600
worker.ajp13.socket_keepalive=1
worker.ajp13.socket_timeout=300

6. Configure URL forwarding from Apache to Tomcat: Put the following lines in your Apache virtual host to forward requests to Tomcat.

<VirtualHost *:80>
    ...
    # Send everything for context "/context" to worker ajp13
    JkMount /context/ ajp13
    JkMount /context/* ajp13
    ...
</VirtualHost>

7. Configure AJP in the Tomcat server. Put the following line in the $TOMCAT_HOME/conf/server.xml file under the Service tag.

<Service name="Catalina">
     ...
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    ...
</Service>

8. Restart the Tomcat and Apache servers: relax, you are done.

Wednesday, December 09, 2009

HBase setup (0.20.0)

Before you begin:

Before you start configuring HBase, you need to have a running Hadoop cluster, which will be the storage for HBase. Please refer to the Hadoop cluster setup document before continuing.

On the HBaseMaster (master) machine:

1. In the file /etc/hosts, define the IP addresses of the masterserver, all the regionserver machines, and the Hadoop namenode. Make sure you define the actual IP (e.g. 192.168.1.9) and not the localhost IP (e.g. 127.0.0.1) for all the machines, including the masterserver; otherwise the regionservers will not be able to connect to the masterserver machine.

    192.168.1.9    hbase-masterserver
    192.168.1.8    hbase-regionserver1
    192.168.1.7    hbase-regionserver2
    192.168.1.6    hadoop-nameserver

    Note: Check that the masterserver machine name resolves to the actual IP, not the localhost IP, using "ping hbase-masterserver".

2. Configure passwordless login from the masterserver to all regionserver machines. Refer to "Configuring passwordless ssh access" above for instructions.

3. Download and unpack hbase-0.20.0.tar.gz from the HBase website to some path on your computer (we'll call the HBase installation root $HBASE_INSTALL_DIR from now on).

4. Edit the file $HBASE_INSTALL_DIR/conf/hbase-env.sh and define the $JAVA_HOME.

    export JAVA_HOME=/usr/lib/jvm/java-6-sun

5. Edit the file $HBASE_INSTALL_DIR/conf/hbase-site.xml and add the following properties. (These configurations are required on all the nodes in the cluster.)

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <configuration>
        <property>
            <name>hbase.master</name>
            <value>hbase-masterserver:60000</value>
            <description>The host and port that the HBase master runs at.
                A value of 'local' runs the master and a regionserver in
                a single process.
            </description>
        </property>

        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://hadoop-nameserver:9000/hbase</value>
            <description>The directory shared by region servers.</description>
        </property>

        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
            <description>The mode the cluster will be in. Possible values are
            false: standalone and pseudo-distributed setups with managed
            Zookeeper; true: fully-distributed with unmanaged Zookeeper
            Quorum (see hbase-env.sh)
            </description>
        </property>
    </configuration>

    Note: Remember to replace the masterserver and nameserver machine names with your real machine names here.

6. Edit $HBASE_INSTALL_DIR/conf/regionservers and add the regionserver machines:

    hbase-regionserver1
    hbase-regionserver2
    hbase-masterserver

    Note: Add the masterserver machine name only if you are running a regionserver on the masterserver machine.
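
The hbase.cluster.distributed description above mentions an unmanaged ZooKeeper quorum configured via hbase-env.sh. In HBase 0.20 you can instead let HBase manage ZooKeeper itself; a minimal sketch of the relevant hbase-site.xml property, assuming the quorum runs on the masterserver machine from step 1 (adjust the host list to your own cluster):

```xml
<!-- Sketch: run the ZooKeeper quorum on the master machine. -->
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase-masterserver</value>
    <description>Comma separated list of servers in the ZooKeeper quorum.</description>
</property>
```

With export HBASE_MANAGES_ZK=true in hbase-env.sh, HBase starts and stops this quorum along with its other daemons.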

On HRegionServer (slave) machine:


1. In the file /etc/hosts, define the IP address of the masterserver machine. Make sure you define the actual IP (e.g. 192.168.1.9) and not the localhost IP (e.g. 127.0.0.1).

    192.168.1.9    hbase-masterserver

Note: Check that the masterserver machine name resolves to the actual IP, not the localhost IP, using "ping hbase-masterserver".

2. Configure passwordless login from all regionserver machines to the masterserver machine. Refer to "Configuring passwordless ssh access" above for instructions.

3. Download and unpack hbase-0.20.0.tar.gz from the HBase website to some path on your computer (we'll call the HBase installation root $HBASE_INSTALL_DIR from now on).

4. Edit the file $HBASE_INSTALL_DIR/conf/hbase-env.sh and define $JAVA_HOME.

    export JAVA_HOME=/usr/lib/jvm/java-6-sun

5. Edit the file $HBASE_INSTALL_DIR/conf/hbase-site.xml and add the following properties. (These configurations are required on all the nodes in the cluster.)

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <configuration>
        <property>
            <name>hbase.master</name>
            <value>hbase-masterserver:60000</value>
            <description>The host and port that the HBase master runs at.
                A value of 'local' runs the master and a regionserver in
                a single process.
            </description>
        </property>

        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://hadoop-nameserver:9000/hbase</value>
            <description>The directory shared by region servers.</description>
        </property>

        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
            <description>The mode the cluster will be in. Possible values are
            false: standalone and pseudo-distributed setups with managed
            Zookeeper; true: fully-distributed with unmanaged Zookeeper
            Quorum (see hbase-env.sh)
            </description>
        </property>
    </configuration>

Start and Stop hbase daemons:

You need to start/stop the daemons only on the masterserver machine; the scripts will then start/stop the daemons on all the regionserver machines. Execute one of the following commands to start or stop HBase.

    $HBASE_INSTALL_DIR/bin/start-hbase.sh
    or
    $HBASE_INSTALL_DIR/bin/stop-hbase.sh
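
This works because the start/stop scripts read conf/regionservers and ssh to each listed host, which is why the passwordless ssh setup in step 2 matters. The loop is essentially the following (a simplified sketch, with echo standing in for the real ssh invocation):

```shell
# Simplified sketch of what HBase's cluster scripts do: iterate over
# conf/regionservers and run the daemon command on each host via ssh.
# (echo stands in for the actual ssh call; scratch file stands in for
# the real conf/regionservers)
cat > /tmp/regionservers << 'EOF'
hbase-regionserver1
hbase-regionserver2
EOF
while read host; do
    echo "ssh $host \$HBASE_INSTALL_DIR/bin/hbase-daemon.sh start regionserver"
done < /tmp/regionservers
```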



Thursday, June 18, 2009

Secondary indexes in HBase

Creating secondary indexes in HBase-0.19.3:

You need to enable indexing in HBase before you can create a secondary index on columns. Edit the file $HBASE_INSTALL_DIR/conf/hbase-site.xml and add the following properties to it.

    <property>
        <name>hbase.regionserver.class</name>
        <value>org.apache.hadoop.hbase.ipc.IndexedRegionInterface</value>
    </property>

    <property>
        <name>hbase.regionserver.impl</name>
        <value>
        org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer
        </value>
    </property>

Adding secondary index while creating table:

    HBaseConfiguration conf = new HBaseConfiguration();
    conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

    HTableDescriptor desc = new HTableDescriptor("test_table");

    desc.addFamily(new HColumnDescriptor("columnfamily1:"));
    desc.addFamily(new HColumnDescriptor("columnfamily2:"));

    desc.addIndex(new IndexSpecification("column1",
        Bytes.toBytes("columnfamily1:column1")));

    desc.addIndex(new IndexSpecification("column2",
        Bytes.toBytes("columnfamily1:column2")));


    IndexedTableAdmin admin = null;
    admin = new IndexedTableAdmin(conf);

    admin.createTable(desc);

Adding index in an existing table:

    HBaseConfiguration conf = new HBaseConfiguration();
    conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

    IndexedTableAdmin admin = null;
    admin = new IndexedTableAdmin(conf);

    admin.addIndex(Bytes.toBytes("test_table"), new IndexSpecification("column2",
    Bytes.toBytes("columnfamily1:column2")));

Deleting an existing index from a table:

    HBaseConfiguration conf = new HBaseConfiguration();
    conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

    IndexedTableAdmin admin = null;
    admin = new IndexedTableAdmin(conf);

    admin.removeIndex(Bytes.toBytes("test_table"), "column2");

Reading from secondary indexed columns:

To read from a secondary index, get a scanner for the index and scan through the data.

    HBaseConfiguration conf = new HBaseConfiguration();
    conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

    IndexedTable table = new IndexedTable(conf, Bytes.toBytes("test_table"));

    // You need to specify which columns to get
    Scanner scanner = table.getIndexedScanner("column1",
        HConstants.EMPTY_START_ROW, null, null, new byte[][] {
        Bytes.toBytes("columnfamily1:column1"),
        Bytes.toBytes("columnfamily1:column2") });

    for (RowResult rowResult : scanner) {
        String value1 = new String(
            rowResult.get(Bytes.toBytes("columnfamily1:column1")).getValue());

        String value2 = new String(
            rowResult.get(Bytes.toBytes("columnfamily1:column2")).getValue());

        System.out.println(value1 + ", " + value2);
    }

    table.close();

To get a scanner to a subset of the rows specify a column filter.

    ColumnValueFilter filter =
        new ColumnValueFilter(Bytes.toBytes("columnfamily1:column1"),
            CompareOp.LESS, Bytes.toBytes("value1-10"));

    scanner = table.getIndexedScanner("column1", HConstants.EMPTY_START_ROW,
        null, filter, new byte[][] {
            Bytes.toBytes("columnfamily1:column1"),
            Bytes.toBytes("columnfamily1:column2") });

    for (RowResult rowResult : scanner) {
        String value1 = new String(
            rowResult.get(Bytes.toBytes("columnfamily1:column1")).getValue());

        String value2 = new String(
            rowResult.get(Bytes.toBytes("columnfamily1:column2")).getValue());

        System.out.println(value1 + ", " + value2);
    }

Example Code:

import java.io.IOException;
import java.util.Date;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.client.tableindexed.IndexSpecification;
import org.apache.hadoop.hbase.client.tableindexed.IndexedTable;
import org.apache.hadoop.hbase.client.tableindexed.IndexedTableAdmin;
import org.apache.hadoop.hbase.filter.ColumnValueFilter;
import org.apache.hadoop.hbase.filter.ColumnValueFilter.CompareOp;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class SecondaryIndexTest {
    public void writeToTable() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        IndexedTable table = new IndexedTable(conf, Bytes.toBytes("test_table"));

        String row = "test_row";
        BatchUpdate update = null;

        for (int i = 0; i < 100; i++) {
            update = new BatchUpdate(row + i);
            update.put("columnfamily1:column1", Bytes.toBytes("value1-" + i));
            update.put("columnfamily1:column2", Bytes.toBytes("value2-" + i));
            table.commit(update);
        }

        table.close();
    }

    public void readAllRowsFromSecondaryIndex() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        IndexedTable table = new IndexedTable(conf, Bytes.toBytes("test_table"));

        Scanner scanner = table.getIndexedScanner("column1",
            HConstants.EMPTY_START_ROW, null, null, new byte[][] {
            Bytes.toBytes("columnfamily1:column1"),
                Bytes.toBytes("columnfamily1:column2") });


        for (RowResult rowResult : scanner) {
            System.out.println(Bytes.toString(
                rowResult.get(Bytes.toBytes("columnfamily1:column1")).getValue())
                + ", " + Bytes.toString(rowResult.get(
                Bytes.toBytes("columnfamily1:column2")).getValue()
                ));
        }

        table.close();
    }

    public void readFilteredRowsFromSecondaryIndex() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        IndexedTable table = new IndexedTable(conf, Bytes.toBytes("test_table"));

        ColumnValueFilter filter =
            new ColumnValueFilter(Bytes.toBytes("columnfamily1:column1"),
                CompareOp.LESS, Bytes.toBytes("value1-40"));

        Scanner scanner = table.getIndexedScanner("column1",
            HConstants.EMPTY_START_ROW, null, filter,
            new byte[][] { Bytes.toBytes("columnfamily1:column1"),
                Bytes.toBytes("columnfamily1:column2") });

        for (RowResult rowResult : scanner) {
            System.out.println(Bytes.toString(
                rowResult.get(Bytes.toBytes("columnfamily1:column1")).getValue())
                + ", " + Bytes.toString(rowResult.get(
                Bytes.toBytes("columnfamily1:column2")).getValue()
                ));
        }

        table.close();
    }

    public void createTableWithSecondaryIndexes() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        HTableDescriptor desc = new HTableDescriptor("test_table");

        // Add column families (note the trailing ':'), not individual columns
        desc.addFamily(new HColumnDescriptor("columnfamily1:"));
        desc.addFamily(new HColumnDescriptor("columnfamily2:"));

        desc.addIndex(new IndexSpecification("column1",
            Bytes.toBytes("columnfamily1:column1")));
        desc.addIndex(new IndexSpecification("column2",
            Bytes.toBytes("columnfamily1:column2")));

        IndexedTableAdmin admin = null;
        admin = new IndexedTableAdmin(conf);

        if (admin.tableExists(Bytes.toBytes("test_table"))) {
            if (admin.isTableEnabled("test_table")) {
                admin.disableTable(Bytes.toBytes("test_table"));
            }

            admin.deleteTable(Bytes.toBytes("test_table"));
        }

        if (admin.tableExists(Bytes.toBytes("test_table-column1"))) {
            if (admin.isTableEnabled("test_table-column1")) {
                admin.disableTable(Bytes.toBytes("test_table-column1"));
            }

            admin.deleteTable(Bytes.toBytes("test_table-column1"));
        }

        admin.createTable(desc);
    }

    public void addSecondaryIndexToExistingTable() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        IndexedTableAdmin admin = null;
        admin = new IndexedTableAdmin(conf);

        admin.addIndex(Bytes.toBytes("test_table"),
            new IndexSpecification("column2",
            Bytes.toBytes("columnfamily1:column2")));

    }

    public void removeSecondaryIndexToExistingTable() throws IOException {
        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        IndexedTableAdmin admin = null;
        admin = new IndexedTableAdmin(conf);

        admin.removeIndex(Bytes.toBytes("test_table"), "column2");
    }

    public static void main(String[] args) throws IOException {
        SecondaryIndexTest test = new SecondaryIndexTest();

        test.createTableWithSecondaryIndexes();
        test.writeToTable();
        test.addSecondaryIndexToExistingTable();
        test.removeSecondaryIndexToExistingTable();
        test.readAllRowsFromSecondaryIndex();
        test.readFilteredRowsFromSecondaryIndex();

        System.out.println("Done!");
    }
}

Using HBase in Java (0.19.3)


Create an HBaseConfiguration object to connect to an HBase server. You need to tell the configuration object where to read the HBase configuration from; to do this, add a resource to the HBaseConfiguration object.
    
    HBaseConfiguration conf = new HBaseConfiguration();
    conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

Create an HTable object, which connects you to a table in HBase.

    HTable table = new HTable(conf, "test_table");

Create a BatchUpdate object on a row to perform update operations (like put and delete). Note that columns are always addressed as "family:qualifier".

    BatchUpdate batchUpdate = new BatchUpdate("test_row1");
    batchUpdate.put("columnfamily1:column1", Bytes.toBytes("some value"));
    batchUpdate.delete("columnfamily1:column2");

Commit the changes to the table using the HTable#commit() method.

    table.commit(batchUpdate);

To read one column value from a row use HTable#get() method.

    Cell cell = table.get("test_row1", "columnfamily1:column1");
    if (cell != null) {
        String valueStr = Bytes.toString(cell.getValue());
        System.out.println("test_row1:columnfamily1:column1 " + valueStr);
    }

To read one row, use the HTable#getRow() method and pull individual cells from the returned RowResult.
 
    RowResult singleRow = table.getRow(Bytes.toBytes("test_row1"));
    Cell cell = singleRow.get(Bytes.toBytes("columnfamily1:column1"));
    if(cell!=null) {
        System.out.println(Bytes.toString(cell.getValue()));
    }

    cell = singleRow.get(Bytes.toBytes("columnfamily1:column2"));
    if(cell!=null) {
        System.out.println(Bytes.toString(cell.getValue()));
    }

To get multiple rows, use a Scanner and iterate through it to get the values.

    Scanner scanner = table.getScanner(
        new String[] { "columnfamily1:column1" });


    // First approach to iterate the scanner.

    RowResult rowResult = scanner.next();
    while (rowResult != null) {
        System.out.println("Found row: " + Bytes.toString(rowResult.getRow())
            + " with value: " +
            rowResult.get(Bytes.toBytes("columnfamily1:column1")));

        rowResult = scanner.next();
    }

    // The other approach is to use a foreach loop. Scanners are iterable!
    for (RowResult result : scanner) {
        System.out.println("Found row: " + Bytes.toString(result.getRow())
            + " with value: " +
            result.get(Bytes.toBytes("columnfamily1:column1")));

    }

    scanner.close();

Example Code:

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseExample {

    public static void main(String args[]) throws IOException {

        HBaseConfiguration conf = new HBaseConfiguration();
        conf.addResource(new Path("/opt/hbase-0.19.3/conf/hbase-site.xml"));

        HTable table = new HTable(conf, "test_table");

        BatchUpdate batchUpdate = new BatchUpdate("test_row1");
        batchUpdate.put("columnfamily1:column1", Bytes.toBytes("some value"));
        batchUpdate.delete("columnfamily1:column2");
        table.commit(batchUpdate);

        Cell cell = table.get("test_row1", "columnfamily1:column1");
        if (cell != null) {
            String valueStr = Bytes.toString(cell.getValue());
            System.out.println("test_row1:columnfamily1:column1 " + valueStr);
        }

        RowResult singleRow = table.getRow(Bytes.toBytes("test_row1"));
        cell = singleRow.get(Bytes.toBytes("columnfamily1:column1"));
        if(cell!=null) {
            System.out.println(Bytes.toString(cell.getValue()));
        }

        cell = singleRow.get(Bytes.toBytes("columnfamily1:column2"));
        if(cell!=null) {
            System.out.println(Bytes.toString(cell.getValue()));
        }

        Scanner scanner = table.getScanner(
            new String[] { "columnfamily1:column1" });

        //First approach to iterate a scanner
        RowResult rowResult = scanner.next();
        while (rowResult != null) {
            System.out.println("Found row: " + Bytes.toString(rowResult.getRow())
                + " with value: " +
                rowResult.get(Bytes.toBytes("columnfamily1:column1")));

            rowResult = scanner.next();
        }

        // The other approach is to use a foreach loop. Scanners are iterable!
        for (RowResult result : scanner) {
            // print out the row we found and the columns we were looking for
            System.out.println("Found row: " + Bytes.toString(result.getRow())
                + " with value: " +
                result.get(Bytes.toBytes("columnfamily1:column1")));

        }

        scanner.close();
        table.close();
    }
}