
Monday, November 6, 2017

Integrating Watson Alchemy API with IBM Data Science Experience

This blog describes integrating the "Watson Alchemy API" service in IBM Bluemix with RStudio in IBM Data Science Experience.

Here, we call the NLP APIs provided by the Alchemy API service from RStudio in IBM Data Science Experience. We use the rJava library to get sentiment, entities, relations, etc. from unstructured data.


1) Setup the "Watson Alchemy API" service in IBM Bluemix


Login to https://console.ng.bluemix.net/ and create an account, then create an Alchemy service.

Get the apikey from the service credentials.
2) Build a Java Application

 
Download the Watson Developer Cloud Java SDK.

Create a Java project with the code below:

package com.bluemix;

import java.util.HashMap;
import java.util.Map;

import com.ibm.watson.developer_cloud.alchemy.v1.AlchemyLanguage;
import com.ibm.watson.developer_cloud.alchemy.v1.model.DocumentSentiment;
import com.ibm.watson.developer_cloud.alchemy.v1.model.Entities;
import com.ibm.watson.developer_cloud.alchemy.v1.model.TypedRelations;

public class BlueMix_Alchemy_API {

    // Not used; the methods below are invoked from R via rJava.
    public static void main(String[] args) {}

    // Returns the document sentiment of the input text as a string.
    public String getSentiment(String inputStr, String apiKey) {
        AlchemyLanguage service = new AlchemyLanguage();
        service.setApiKey(apiKey);
        Map<String, Object> params = new HashMap<String, Object>();
        params.put(AlchemyLanguage.TEXT, inputStr);
        DocumentSentiment sentiment = service.getSentiment(params).execute();
        return sentiment.toString();
    }

    // Returns the typed relations found in the input text as a string.
    public String getTypedRelations(String inputStr, String apiKey) {
        AlchemyLanguage service = new AlchemyLanguage();
        service.setApiKey(apiKey);
        Map<String, Object> params = new HashMap<String, Object>();
        params.put(AlchemyLanguage.TEXT, inputStr);
        TypedRelations relations = service.getTypedRelations(params).execute();
        return relations.toString();
    }

    // Returns the entities found in the input text as a string.
    public String getEntities(String inputStr, String apiKey) {
        AlchemyLanguage service = new AlchemyLanguage();
        service.setApiKey(apiKey);
        Map<String, Object> params = new HashMap<String, Object>();
        params.put(AlchemyLanguage.TEXT, inputStr);
        Entities entities = service.getEntities(params).execute();
        return entities.toString();
    }
}

Generate a jar file from the Java code.

3) Setup the "IBM Data Science Experience"

 
Login to http://datascience.ibm.com and create an account.



Open RStudio and install the rJava library.

Create a folder and set it as the working directory. Upload the application jar and java-sdk-3.3.1-jar-with-dependencies.jar to the working directory.

R Commands:

> library(rJava)
>
> cp = c("/home/rstudio/WatsonAlchemySentimentAnalysis/WatsonAlchemySentimentAnalysis.jar",
        "/home/rstudio/WatsonAlchemySentimentAnalysis/java-sdk-3.3.1-jar-with-dependencies.jar")

>
> .jinit(classpath=cp)
>
> instance = .jnew("com.bluemix.BlueMix_Alchemy_API")
>
> sentiment <- .jcall(instance, "S", "getSentiment", "IBM Watson won the Jeopardy television show hosted by Alex Trebek","<provideTheAPIKeyFromBluemix>")
>
> cat(sentiment)
>
> entities <- .jcall(instance, "S", "getEntities", "IBM Watson won the Jeopardy television show hosted by Alex Trebek","<provideTheAPIKeyFromBluemix>")
>
> cat(entities)
>
> relation <- .jcall(instance, "S", "getTypedRelations", "IBM Watson won the Jeopardy television show hosted by Alex Trebek","<provideTheAPIKeyFromBluemix>")
>
> cat(relation)

Link to GitHub code

Further Reading: 

Integrating the "IBM BigInsights for Apache Hadoop" service in IBM Bluemix with RStudio in IBM Data Science Experience - Part 1 


Integrating the "IBM BigInsights for Apache Hadoop" service in IBM Bluemix with RStudio in IBM Data Science Experience - Part 2



Wednesday, October 5, 2016

Java JDBC Application to connect to IBM BigSQL or IBM DB2 in a Kerberos Cluster

This blog provides the complete code to connect to the IBM BigSQL Database in a Kerberos Cluster. The same code works for IBM DB2 also.

In the code below, we do not pass a database user/password to connect; instead, we use Kerberos authentication to connect to the database.

1) Java - JDBC Code


public class KerberosTest {

    public static void main(String[] args) {

        if (args == null || args.length != 2) {
            throw new IllegalArgumentException(
                    "Invalid arguments. Specify headnode hostname/ip & database port");
        }

        String serverName = args[0];
        int portNumber = Integer.parseInt(args[1]);
        String databaseName = "bigsql";

        java.util.Properties properties = new java.util.Properties();

        // 11 is the integer value for Kerberos security
        properties.put("securityMechanism", "11");

        // Provide the Kerberos principal
        properties.put("KerberosServerPrincipal",
                "biadmin/btestserver.ibm.com@IBM.COM");

        String url = String.format("jdbc:db2://%s:%d/%s", serverName,
                portNumber, databaseName);

        java.sql.Connection con = null;
        try {
            Class.forName("com.ibm.db2.jcc.DB2Driver").newInstance();
        } catch (Exception e) {
            System.out.println("Error: failed to load Db2 jcc driver.");
        }

        try {
            System.out.println("url: " + url);

            con = java.sql.DriverManager.getConnection(url, properties);

            java.sql.Statement s2 = con.createStatement();

            try {
                // Drop the tables left over from a previous run, if any
                s2.executeUpdate("drop table t1");
                s2.executeUpdate("drop table tbint");
                System.out.println("Drop Hadoop & DB2 tables successfull!!!");
            } catch (Exception e) {
                System.out.println("drop is failing");
            }

            try {
                // Create DB2 table
                s2.executeUpdate("create table t1 (c1 int)");

                // Create BigSQL (Hadoop) table
                s2.executeUpdate("create hadoop table if not exists "
                        + "tbint (col1 INT, col2 INT, col3 INT)");

                System.out.println("Created Hadoop & DB2 tables successfully!!!");
            } catch (Exception e) {
                System.out.println("create is failing");
            }

            // Insert into the DB2 table
            s2.executeUpdate("insert into t1 values (100)");

            // Query the DB2 table
            java.sql.PreparedStatement ps = con.prepareStatement("select * from t1");
            java.sql.ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }

            // Insert into the BigSQL table
            s2.executeUpdate("insert into tbint values(1,2,3),(1,2,3),(1,2,3)");

            // Query the BigSQL table
            ps = con.prepareStatement("select * from tbint");
            rs = ps.executeQuery();
            while (rs.next()) {
                System.out.printf("%s,%s,%s%n", rs.getString(1),
                        rs.getString(2), rs.getString(3));
            }

            con.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}



To compile the above code, ensure the DB2 jars - db2jcc_license_cu.jar, db2jcc.jar & db2jcc4.jar - are in the classpath.
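As a sketch, the classpath can be assembled and verified before compiling; the jar paths below are assumptions, so adjust them to wherever the DB2 driver jars actually live on your system:

```shell
# Build the compile/run classpath from the DB2 driver jars and report any
# jars that are missing. The paths are placeholders for illustration.
CP="db2jcc.jar:db2jcc4.jar:db2jcc_license_cu.jar"
for jar in ${CP//:/ }; do
  [ -f "$jar" ] || echo "missing: $jar"
done
echo "classpath: $CP"
# javac -cp "$CP" KerberosTest.java
# java -cp ".:$CP" KerberosTest btestserver.ibm.com 51000
```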

2) Testing the JDBC Application in Kerberos Cluster


Do a kinit to get a valid ticket.

[biadmin@btestserver ~]$
[biadmin@btestserver ~]$ date
Wed Oct  5 16:50:18 PDT 2016
[biadmin@btestserver ~]$
[biadmin@btestserver ~]$
[biadmin@btestserver ~]$ /opt/ibm/biginsights/jdk/jre/bin/kinit -k -t /opt/ibm/biginsights/conf/security/keytabs/biadmin.keytab biadmin/btestserver.ibm.com
Done!
New ticket is stored in cache file /home/biadmin/krb5cc_biadmin
[biadmin@btestserver ~]$
[biadmin@btestserver ~]$
[biadmin@btestserver ~]$ klist -c krb5cc_biadmin
Ticket cache: FILE:krb5cc_biadmin
Default principal: biadmin/btestserver.ibm.com@IBM.COM

Valid starting     Expires            Service principal
10/05/16 16:50:22  10/06/16 16:50:22  krbtgt/IBM.COM@IBM.COM
[biadmin@btestserver ~]$
[biadmin@btestserver ~]$


Run the Java code.

[biadmin@btestserver ~]$
[biadmin@btestserver ~]$ java -jar /opt/nisan/BigSQL_jdbc_kerberos.jar "btestserver.ibm.com" 51000
url: jdbc:db2://btestserver.ibm.com:51000/bigsql
Drop Hadoop & DB2 tables successfull!!!
Created Hadoop & DB2 tables successfully!!!
100
1,2,3
1,2,3
1,2,3
[biadmin@btestserver ~]$


3) Verifying the records inserted into the tables


[biadmin@btestserver ~]$
[biadmin@btestserver ~]$ su bigsql
Password:
[bigsql@btestserver biadmin]$
[bigsql@btestserver biadmin]$ db2 "connect to bigsql"

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.6.2
 SQL authorization ID   = BIGSQL
 Local database alias   = BIGSQL

[bigsql@btestserver biadmin]$
[bigsql@btestserver biadmin]$ db2 "select * from biadmin.t1"
C1
-----------
        100

  1 record(s) selected.

[bigsql@btestserver biadmin]$
[bigsql@btestserver biadmin]$ db2 "select * from biadmin.tbint"
COL1        COL2        COL3
----------- ----------- -----------
          1           2           3
          1           2           3
          1           2           3

  3 record(s) selected.

[bigsql@btestserver biadmin]$

I will cover a JDBC application that connects to Apache Hive in my next blog.