This blog describes configuring and running Apache Phoenix on IBM BigInsights.
Apache Phoenix is an open-source project that provides SQL on top of HBase. Refer to: https://phoenix.apache.org/
Environment: BigInsights 4.2
Step 1: Configure Phoenix from Ambari
Log in to the Ambari UI, go to the HBase configuration, and enable Phoenix.
Save the changes and restart HBase.
Step 2: Validating Phoenix
Log in to a Linux terminal as the hbase user and run the commands below. They create sample tables, load data, and run some select queries; the output is printed to the console.
cd /usr/iop/current/phoenix-client/bin
./psql.py localhost:2181:/hbase-unsecure ../doc/examples/WEB_STAT.sql ../doc/examples/WEB_STAT.csv ../doc/examples/WEB_STAT_QUERIES.sql
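psql.py takes a ZooKeeper connection string (host:port:znode) followed by a list of files: the .sql files are executed as scripts and the .csv file is loaded as table data. As a rough sketch (the grouping rule here is an assumption for illustration, not taken from the psql.py source), the arguments above break down like this:

```python
# Sketch: group psql.py-style arguments by role.
# Assumption (illustrative only): the first argument is the ZooKeeper
# connection string; remaining files are handled by extension.
def classify_args(args):
    conn, files = args[0], args[1:]
    plan = {"connection": conn, "run": [], "load": []}
    for f in files:
        if f.endswith(".sql"):
            plan["run"].append(f)   # executed as a SQL script
        elif f.endswith(".csv"):
            plan["load"].append(f)  # loaded as table data
    return plan

plan = classify_args([
    "localhost:2181:/hbase-unsecure",
    "../doc/examples/WEB_STAT.sql",
    "../doc/examples/WEB_STAT.csv",
    "../doc/examples/WEB_STAT_QUERIES.sql",
])
```

So WEB_STAT.sql creates the table, WEB_STAT.csv supplies the rows, and WEB_STAT_QUERIES.sql runs the sample queries.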
Step 3: Running queries using Phoenix
This section focuses on running some basic queries in Phoenix. Open a terminal and run the commands below.
cd /usr/iop/current/phoenix-client/bin
./sqlline.py testiop.in.com:2181:/hbase-unsecure
Create the table, insert some rows, then select from the table.
CREATE TABLE IF NOT EXISTS CUSTOMER_ORDER (
ID BIGINT NOT NULL,
CUSTOMID INTEGER,
ADDRESS VARCHAR,
ORDERID INTEGER,
ITEM VARCHAR,
COST INTEGER,
CONSTRAINT PK PRIMARY KEY (ID)
);
upsert into CUSTOMER_ORDER values (1,234,'11,5-7,westmead',99,'iphone7',1200);
upsert into CUSTOMER_ORDER values (2,288,'12,5-7,westmead',99,'iphone6',1000);
upsert into CUSTOMER_ORDER values (3,299,'13,5-7,westmead',99,'iphone5',600);
select * from CUSTOMER_ORDER;
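Note that Phoenix uses UPSERT rather than INSERT: a row with the same primary key is overwritten instead of rejected or duplicated. A toy in-memory model of that behavior (the dict-based table here is purely illustrative):

```python
# Toy model of Phoenix UPSERT VALUES semantics: the primary key (ID)
# decides whether a row is inserted or replaced.
table = {}  # primary key ID -> row

def upsert(table, id, customid, address, orderid, item, cost):
    table[id] = {"CUSTOMID": customid, "ADDRESS": address,
                 "ORDERID": orderid, "ITEM": item, "COST": cost}

upsert(table, 1, 234, "11,5-7,westmead", 99, "iphone7", 1200)
upsert(table, 2, 288, "12,5-7,westmead", 99, "iphone6", 1000)
upsert(table, 3, 299, "13,5-7,westmead", 99, "iphone5", 600)
# Re-upserting key 1 updates the existing row; the table still has 3 rows.
upsert(table, 1, 234, "11,5-7,westmead", 99, "iphone7", 1100)
```

This is why the CONSTRAINT PK PRIMARY KEY (ID) clause in the DDL matters: it is the key that UPSERT matches on.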
For other SQL query syntax, refer to https://phoenix.apache.org/language/
Step 4: Bulk loading data into the table
Here we bulk load data into the table created above.
Upload the data to HDFS:
[root@test bin]#
[root@test bin]# hadoop fs -cat /tmp/importData.csv
11,234,'11,5-7,westmead',99,'iphone7',1200
12,288,'11,5-7,westmead',99,'iphone7',1200
13,299,'11,5-7,westmead',99,'iphone7',1200
14,234,'11,5-7,westmead',99,'iphone7',1200
[root@test bin]#
Run the import command from the terminal:
sudo -u hbase hadoop jar ../phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table CUSTOMER_ORDER --input /tmp/importData.csv
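One thing to watch in the sample file: the ADDRESS and ITEM fields are wrapped in single quotes because the values themselves contain commas. CsvBulkLoadTool has options for the delimiter and quote character (check its --help output); if the tool's quote character does not match the file, each quoted field splits into several. A quick Python check of how the quote character changes parsing:

```python
import csv
from io import StringIO

# One row from /tmp/importData.csv: ADDRESS and ITEM contain commas and
# are protected by single quotes.
row = "11,234,'11,5-7,westmead',99,'iphone7',1200"

# Parsed with quotechar="'", the row yields the six columns of
# CUSTOMER_ORDER (ID, CUSTOMID, ADDRESS, ORDERID, ITEM, COST).
fields = next(csv.reader(StringIO(row), quotechar="'"))

# Parsed with the default double-quote character, the single quotes are
# treated as ordinary text and the embedded commas split the row apart.
wrong = next(csv.reader(StringIO(row)))
```

Here `fields` has 6 entries while `wrong` has 8, so make sure the bulk load tool is told which quote character the file uses.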
With that, we have configured Phoenix on BigInsights and run some basic queries against it.