• SYEDHUSSIM.COM
  • php-sapb1-v2 Library Documentation v2 07 November 2024 - 581 views
    A simple and easy to use PHP library for SAP Business One Service Layer API.

    Requirements


    PHP 7.2+
    json
    libxml

    Installation


    Download or clone the php-sapb1-v2 repository from https://github.com/syedhussim/php-sapb1-v2.

    Usage


    A video walkthrough accompanies this article and explains how to use the library.





  • JSON_TABLE Function 31 March 2023 - 3327 views
    SAP HANA has several functions to help query JSON data and in this article, we'll take a look at how to use the JSON_TABLE function to query JSON data stored in a regular column.

    Let's start by creating an employee table with a single NVARCHAR column that will contain JSON data for a single employee and populate it with some dummy data.

    Listing 1

    CREATE ROW TABLE EMPLOYEE (DATA NVARCHAR(3000));
    

    Listing 2

    INSERT INTO EMPLOYEE (DATA) VALUES('{ 
        "Name": "John",
        "Salary": 60000,
        "Address": {
            "Street": "100 Riverdale Ave",
            "City": "San Diego",
            "State": "CA"
    
        }
    }');
    
    INSERT INTO EMPLOYEE (DATA) VALUES('{ 
        "Name": "Suzan",
        "Salary": 70000,
        "Address": {
            "Street": "200 Sunnydale Ave",
            "City": "Miami",
            "State": "FL"
    
        }
    }');
    
    INSERT INTO EMPLOYEE (DATA) VALUES('{ 
        "Name": "Ali",
        "Salary": 56000,
        "Address": {
            "Street": "300 Bushey Mill Lane",
            "City": "San Antonio",
            "State": "TX"
    
        }
    }');
    

    JSON_TABLE Function


    Now that we have a few employee records, let's take a look at how we can query the data. The JSON_TABLE function creates a relational table from a JSON object. To do this, the function needs several arguments: first, the JSON data itself, which can come from a string literal or a table column; second, a SQL/JSON expression that indicates which JSON elements to use when producing the output; and third, a columns definition that essentially maps object properties to column names.

    Let's take a look at a few examples. Remember that the JSON_TABLE function creates a relational table, which means you can treat it like a regular table.

    Listing 3

    SELECT * FROM 
    
    JSON_TABLE(EMPLOYEE.DATA , '$'
        COLUMNS(
            NAME VARCHAR(20) PATH '$.Name'
        )
    )
    

    In the code sample above, the JSON_TABLE function is provided with the JSON data stored in the DATA column of the EMPLOYEE table. The $ JSON expression indicates that we want to use the whole object as the source, and finally the columns definition specifies that the Name property of the JSON data should be mapped to a NAME column of type VARCHAR(20).

    Executing the query above will produce three rows, one for each employee's name. We can add the employee's salary to the result by modifying the columns definition as shown below.

    Listing 4

    SELECT * FROM 
    
    JSON_TABLE(EMPLOYEE.DATA , '$'
        COLUMNS(
            NAME VARCHAR(20) PATH '$.Name',
            SALARY DOUBLE PATH '$.Salary'
        )
    )
    
  • Connect To HANA DB Using Rust 21 February 2023 - 3123 views
    The hdbconnect package makes it easy to connect to a HANA database using Rust and in this article, we'll take a look at how to establish a connection and perform some queries.

    Connecting To HANA


    Let's start by creating a new binary project and then configuring Cargo.toml to include the hdbconnect dependency.

    cargo new hana_sample

    Then add the dependency to Cargo.toml:

    [dependencies]
    hdbconnect = "0.25"

    Now that hdbconnect has been added as a dependency, we can connect to HANA using the Connection type as shown below in listing 1.

    Listing 1

    use hdbconnect::Connection;
    
    fn main() {
    
        let user = "USER_NAME";
        let password = "PASSWORD";
        let host = "HOST:PORT";
    
        let mut connection = Connection::new(format!("hdbsql://{user}:{password}@{host}")).unwrap();
    }
    

    Executing Queries


    The Connection type has several methods for executing queries, one of which is the query() method. This method takes an SQL statement as a &str and returns a Result. If the Result is successful, the Ok variant will contain a ResultSet. You can then use the ResultSet's try_into() method to translate the returned data into Rust types that implement serde::Deserialize.

    Let's take a look at a few examples. The query in listing 2 below returns a single row with one value, which is deserialized into a String.

    Listing 2

    ...
    
    let result : String = connection.query("SELECT NOW() FROM DUMMY").unwrap().try_into().unwrap();
    

    Queries that return a single row with multiple columns can be deserialized into a tuple.

    Listing 3

    let result : (String, String) = connection.query("SELECT CURRENT_DATE, CURRENT_TIME FROM DUMMY").unwrap().try_into().unwrap();
    
    println!("{} {}", result.0, result.1);
    

    In the code samples above, date/time values are deserialized into Strings. You can, however, use the HanaDate and HanaTime types, which provide convenient methods for working with dates and times.

    Listing 4

    use hdbconnect::time::{ HanaDate, HanaTime };
    ...
    let result : (HanaDate, HanaTime) = connection.query("SELECT CURRENT_DATE, CURRENT_TIME FROM DUMMY").unwrap().try_into().unwrap();
    
    println!("Day : {}, Hour : {}", result.0.day(), result.1.hour());
    

    Queries that return multiple rows can be deserialized into a vector that can then be used to iterate over the rows.

    Listing 5

    let result : Vec<(String, String)> = connection.query("SELECT SCHEMA_NAME, TABLE_NAME FROM SYS.TABLES LIMIT 5").unwrap().try_into().unwrap();
    
    for row in result{
        println!("{} - {}", row.0, row.1);
    }
    

    Alternatively, since the ResultSet type implements the Iterator trait, we can rewrite the code above as follows.

    Listing 6

    for result in connection.query("SELECT SCHEMA_NAME, TABLE_NAME FROM SYS.TABLES LIMIT 5").unwrap(){
        let data : (String, String) = result.unwrap().try_into().unwrap();
    
        println!("{} - {}", data.0, data.1);
    }
    

    Queries that return NULL values need to be handled correctly, as Rust has no direct equivalent of NULL. For example, the code below will cause the program to crash with the following error:

    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Deserialization { source: ValueType("The value cannot be converted into type String") }'

    Listing 7

    let result : String = connection.query("SELECT NULL FROM DUMMY").unwrap().try_into().unwrap();
    

    We can fix this error either by returning a default value using the IFNULL function in the SQL query or deserializing the value into an Option enum type as shown below.

    Listing 8

    // The query returns an empty string if the column value is null
    let result : String = connection.query("SELECT IFNULL(NULL, '') FROM DUMMY").unwrap().try_into().unwrap();
    
    // Deserialize the value into an Option enum type
    let result : Option<String> = connection.query("SELECT NULL FROM DUMMY").unwrap().try_into().unwrap();
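
    Once a nullable value has been deserialized into an Option, it can be handled with ordinary pattern matching. The standalone sketch below needs no database connection; format_value is a hypothetical helper of mine, not part of hdbconnect:

    ```rust
    // Hypothetical helper: turn a possibly-NULL column value into display text.
    fn format_value(value: Option<String>) -> String {
        match value {
            // The column held a real value.
            Some(v) => v,
            // The column was NULL.
            None => String::from("<null>"),
        }
    }

    fn main() {
        println!("{}", format_value(Some(String::from("2023-02-21")))); // prints 2023-02-21
        println!("{}", format_value(None)); // prints <null>
    }
    ```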
    


  • Count File Types In Directory 29 January 2023 - 3093 views
    In this last article on developing a command line program, we'll develop a program that counts the occurrence of different file types in a directory and its sub-directories. The program will accept a directory as an argument and recursively scan all sub-directories. We'll use a HashMap to keep track of the count for each file type.

    Let's begin by setting up the main function to collect command line arguments and a HashMap to store the running count for each file type.

    Listing 1

    use std::env;
    use std::collections::HashMap;
    
    fn main() {
    
        let args : Vec<String> = env::args().collect();
    
        let mut map : HashMap<String, usize> = HashMap::new();
    }
    

    Rust provides a few handy functions for file system-related tasks. One of these is read_dir(), which returns an iterator over the entries in a directory. Since read_dir() only provides an iterator for the specified directory, we'll need to write a function that can recursively iterate over all sub-directories.

    To do this, we'll need to know if an entry returned by the iterator is a directory or a file. If it's a directory, we can call our function again with the new directory path.

    Listing 2

    use std::fs;

    fn read_directory(path : &str){

        let dir = fs::read_dir(path).unwrap();

        for result in dir{

            let entry = &result.unwrap();
            let metadata = entry.metadata();

            if metadata.unwrap().is_dir(){
                read_directory(entry.path().to_str().unwrap());
            }else{
                // Todo
            }
        }
    }
    

    The read_directory() function in listing 2 above accepts a directory path as a &str, which is then used to read the directory using the read_dir() function from the fs module. The read_dir() function returns an iterator, which we can use to iterate over the entries in the directory. The iterator yields instances of io::Result<DirEntry>, because new errors can be encountered during iteration. After unwrapping the result from the iterator, we get a DirEntry, which has a metadata() method.

    The metadata for an entry will tell us if the entry is a file or a directory. If it's a directory, we call the read_directory() method with the path to the directory.

    Note that many of the functions including read_dir() return a Result which should be correctly handled but we'll ignore the errors for this program to keep the code simple.

    Now that we have a function that can recursively iterate over all folders in a directory, all we have to do is pass the HashMap initialized in the main function to the read_directory() function, where we can keep track of the count for each file type using the file's extension. Listing 3 below shows the complete code.

    Listing 3

    use std::env;
    use std::fs;
    use std::collections::HashMap;
    
    fn main() {
    
        let args : Vec<String> = env::args().collect();
    
        let mut map : HashMap<String, usize> = HashMap::new();
    
        read_directory(&args[1], &mut map);
    
        for (ext, count) in map{
            println!("{ext}\t\t{count}");
        }
    }
    
    fn read_directory(path : &str, map : &mut HashMap<String, usize>){
    
        let dir = fs::read_dir(path).unwrap();
    
        for result in dir{
    
            let entry = &result.unwrap();
            let metadata = entry.metadata();
    
            if metadata.unwrap().is_dir(){
                let dir_name = entry.file_name().to_str().unwrap().to_string();
    
                if dir_name.chars().nth(0).unwrap() != '.' {
                    read_directory(entry.path().to_str().unwrap(), map);
                }
            }else{
                let file_name = entry.file_name().to_str().unwrap().to_string();
    
                if file_name.contains("."){
                    let (_file, ext) = file_name.split_once(".").unwrap();
    
                    if map.contains_key(ext){
                        let count = map.get(ext).unwrap_or(&0);
                        let total = *count + 1;
                        map.insert(ext.to_string(), total);
                    }else{
                        map.insert(ext.to_string(), 1);
                    }
                }
            }
        }
    }
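
    As a side note, the counting branch in listing 3 can be written more compactly with HashMap's entry() API. The sketch below (count_extension is my name, not the article's) also uses rsplit_once() so that a multi-dot name such as archive.tar.gz is counted under gz rather than tar.gz; the article's split_once() would use everything after the first dot:

    ```rust
    use std::collections::HashMap;

    // Record one occurrence of a file name's extension in the map.
    // rsplit_once splits at the LAST dot, so "archive.tar.gz" yields "gz".
    fn count_extension(file_name: &str, map: &mut HashMap<String, usize>) {
        if let Some((stem, ext)) = file_name.rsplit_once('.') {
            // Skip names like ".gitignore" where nothing precedes the dot.
            if !stem.is_empty() {
                // entry() inserts 0 for a new key, then we bump the count.
                *map.entry(ext.to_string()).or_insert(0) += 1;
            }
        }
    }

    fn main() {
        let mut map: HashMap<String, usize> = HashMap::new();
        for name in ["a.txt", "b.txt", "c.rs", "archive.tar.gz"] {
            count_extension(name, &mut map);
        }
        for (ext, count) in &map {
            println!("{ext}\t\t{count}");
        }
    }
    ```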
    
  • A Simple HTTP Server - ThreadPool - Part 3 15 January 2023 - 4024 views
    In part 1, we developed a simple single-threaded HTTP server that processed requests synchronously. In part 2, we modified the server so that each request was handled in a separate thread, which improved concurrency but increased the load on the CPU and memory. In this article, we'll modify the server to use a ThreadPool which will help to improve concurrency while reducing the CPU and memory usage.

    Contents


    A Simple HTTP Server - Part 1
    A Simple HTTP Server - Multi Threading - Part 2
    A Simple HTTP Server - ThreadPool - Part 3

    ThreadPools


    Before we begin developing a ThreadPool, let me explain what a ThreadPool is for those who are not sure. Simply put, a ThreadPool is a collection of pre-initialized idle threads that are waiting to do some work. When a thread in the collection receives a task, it executes it; once it's done, the thread goes back to waiting for a new task. Reusing threads in this manner allows us to achieve a greater level of concurrency without burdening the system's resources.

    Let's take the basic design of a ThreadPool and write some code to handle multiple client connections in our HTTP server. We know that we need to create threads and initialize them so that they are ready and waiting to accept a task. A good place to start is to create a custom thread class that inherits from the Thread class.

    Listing 1

    class SocketThread extends Thread{
    
        public void run(){
            // Do work here
        }
    }
    

    The SocketThread class above will need to be initialized and started before the server begins accepting client connections. When the server accepts a new client connection, we need to pass the connection to one of the threads in our pool. We can add a setSocket() method to the SocketThread class and call this method during the accept client connection phase.

    Listing 2

    class SocketThread extends Thread{
    
        public void setSocket(Socket socket){
            // To-Do
        }
        
        ...
    }
    

    The setSocket() method needs to store the socket in a collection so that it can be queued. The run() method will then take a socket from the head of the queue and continue the client-handling process. We can use a LinkedBlockingQueue to store sockets, as this type of queue is thread-safe.

    Listing 3

    class SocketThread extends Thread{
    
        private final LinkedBlockingQueue<Socket> queue;
    
        public SocketThread(){
            this.queue = new LinkedBlockingQueue<Socket>();
        }
    
        public void setSocket(Socket socket){ 
            this.queue.add(socket);
        }
    
        public void run(){
            // Do work here
        }
    }
    


    The run() method gets called when the thread is started. Once the method has finished executing, the thread is terminated. To keep the thread alive, we need to stop the run() method from completing. You may be tempted to use an infinite loop, but busy-waiting like that will cause the system's CPU to spike. Thankfully, the LinkedBlockingQueue class has a take() method that blocks until an item in the collection becomes available.

    In our case, the take() method will block the SocketThread until a socket is available in the queue. Once the socket becomes available we can continue handling the client request/response. At this point, the thread will no longer be blocked and so it will terminate when the run() method completes. To keep the thread alive, we simply call the run() method again making it a recursive method. Listing 4 below shows the complete SocketThread class which includes handling the client request and response.

    Listing 4

    class SocketThread extends Thread{
    
        private final LinkedBlockingQueue<Socket> queue;
    
        public SocketThread(){
            this.queue = new LinkedBlockingQueue<Socket>();
        }
    
        public void setSocket(Socket socket){ 
            this.queue.add(socket);
        }
    
        public void run(){
            try{
    
                Socket client = this.queue.take();
    
                // Get A BufferedReader/BufferedWriter to handle reading and writing to the stream.
    
                BufferedReader requestReader = 
                            new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                                
                BufferedWriter responseWriter = 
                            new BufferedWriter(
                                new OutputStreamWriter(client.getOutputStream()));
    
                // Read all the data sent from 
                // the client before we send a response.
    
                while (true){
                    String headerLine = requestReader.readLine();
    
                    if (headerLine.length() == 0){
                        break;
                    }
                }
    
                // How original is this?
                responseWriter.write("Hello World\n");
                responseWriter.flush();
    
                // Closing the client connection will close, both the input and output streams.
                client.close();
    
                this.run();
    
            }catch(Exception e){
                e.printStackTrace();
            }
        }
    }
    

    We can now create several instances of SocketThread to simulate a ThreadPool. Normally a ThreadPool will abstract away the creation of threads and manage an internal array of threads, but I want to keep the code simple; the reason will become clearer later.

    Let's create two instances of SocketThread in the main method and call the start() method on them. When the threads start, the run() method of each thread will wait for a socket.

    Listing 5

    import java.net.*;
    import java.io.*;
    import java.util.concurrent.LinkedBlockingQueue;
    
    class HttpServer{
    
        public static void main(String args[]){
    
            SocketThread thread1 = new SocketThread();
            thread1.start();
    
            SocketThread thread2 = new SocketThread();
            thread2.start();
            
            ...
        }
    }
    

    When the server accepts a socket, we can call the setSocket() method on one of the threads we've created. Since we have two threads, we need to find a way to decide which thread to use. We can solve this problem by alternating between each thread as shown below.

    Listing 6

    
    import java.net.*;
    import java.io.*;
    import java.util.concurrent.LinkedBlockingQueue;
    
    class HttpServer{
    
        public static void main(String args[]){
    
            SocketThread thread1 = new SocketThread();
            thread1.start();
    
            SocketThread thread2 = new SocketThread();
            thread2.start();
    
            try{
                // Create a new server socket and listen on port 9000
                try (ServerSocket server = new ServerSocket(9000)){
    
                    // Continue to listen for client connections
    
                    int i = 0;
                    
                    while (true){
    
                        // Accept a client connection. accept() is a blocking method.
                        Socket client = server.accept();
    
                        if (i % 2 == 0){
                            thread1.setSocket(client);
                        }else{
                            thread2.setSocket(client);
                        }
    
                        i++;
                    }
                }
    
            }catch(Exception e){
                e.printStackTrace();
            }
        }
    }
    

    An alternative approach to alternating between the threads is to pick the thread with the least number of sockets in its queue. To achieve this, we could expose the size of each thread's queue as a public method and then compare the two threads.
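
    The comparison itself is simple. In the sketch below, the queue sizes are passed in as plain integers, standing in for a hypothetical public queue-size method on SocketThread (LoadBalance and pickLeastLoaded are my names for illustration, not part of the article's server):

    ```java
    // Sketch: choose the thread with the fewest queued sockets.
    // Each element of queueSizes stands in for calling a hypothetical
    // queueSize() method on a SocketThread (e.g. returning queue.size()).
    class LoadBalance {

        static int pickLeastLoaded(int[] queueSizes) {
            // Return the index of the thread with the fewest queued sockets.
            int best = 0;
            for (int i = 1; i < queueSizes.length; i++) {
                if (queueSizes[i] < queueSizes[best]) {
                    best = i;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Thread 0 has 3 queued sockets, thread 1 has 1, thread 2 has 2.
            System.out.println(pickLeastLoaded(new int[]{3, 1, 2})); // prints 1
        }
    }
    ```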

    Earlier I mentioned that there was a reason why I wanted to keep the code simple, and that's because Java has built-in classes to help create ThreadPools. In the next article, we'll refactor our code to make use of the ExecutorService API so we won't need to create and manage thread instances ourselves.