Thursday, September 14, 2017

Demo App for REST enabled SQL


Getting Started


 The new Oracle REST Data Services release adds SQL over REST. How to enable it is covered in my last blog post here: http://krisrice.blogspot.com/2017/09/ords-173-beta-introducing-rest-enabled.html

cURL Examples

The simplest way to test this new feature is with a curl command sending over the SQL.


$ curl -X "POST" "http://localhost:9090/ords/hr/_/sql"  \
       -H "Content-Type: application/sql"               \
       -u HR:oracle                                     \
   -d $'select * from dual;' 


There are a number of other curl-based examples in the example GitHub project: https://github.com/oracle/oracle-db-tools/tree/master/ords/rest-sql . The examples try to cover the various types of output that can be returned. This includes trying to use SPOOL ( which is a restricted command ), DDL, a full sql script, the SQLcl command "DDL", and others.




A better way


cURL is great, but a web page is much more dynamic for showing off a feature like this. Dermot, who created this feature, built a demo page covering many of its capabilities, as he showed in this tweet ( hint: follow him ).

Starting from that example, there is a file in that same GitHub folder.
It is a full test page based on Oracle JET, ORDS, and CodeMirror that can be placed on your ORDS server ( for CORS reasons ) and served up. There's a series of inputs on the left, output on the right-hand side, and finally some examples on the bottom showing how to perform the call in cURL, jQuery, or SQLcl.




The most useful thing in this demo page is the Examples drop list. There's everything from a trivial select from dual to POSTing a fully formed JSON document of the command to run, such as a select with a bind of a VARRAY:


{
  "statementText": "SELECT ? as col_ARRAY FROM dual",
  "offset": 0,
  "limit": 5,
  "binds":[
{"index":1,"data_type":"VARRAY", "type_name":"ADHOC_VARRAY_NUMBER","value":[1,5,3]}
]
}
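A request document like this can be assembled in code before POSTing it. A minimal Python sketch building the same payload shown above (ADHOC_VARRAY_NUMBER comes from the demo page's example; nothing here touches the network):

```python
import json

# Build the bind document shown above; ADHOC_VARRAY_NUMBER is the
# example type name from the demo page, not a built-in type.
payload = {
    "statementText": "SELECT ? as col_ARRAY FROM dual",
    "offset": 0,
    "limit": 5,
    "binds": [
        {"index": 1,
         "data_type": "VARRAY",
         "type_name": "ADHOC_VARRAY_NUMBER",
         "value": [1, 5, 3]}
    ],
}

# Serialize; this body would be POSTed with Content-Type: application/json.
body = json.dumps(payload)
print(body)
```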


The Output


The returned JSON can vary quite a lot depending on what is being sent, but the basic structure is simple: there is an array of "items", one per statement processed. The variance is in what each item contains.

For a query there will be an items[N].resultSet. This has a child "metadata" describing the columns: datatypes, the JSON-scrubbed name, and the real name. Peered to this is another "items" array holding the rows from the select.

For a non-query there is an items[N].response which contains the text of what the command did.

And it can get more complicated from there.

{
    "env": {
        "defaultTimeZone": "America/New_York"
    },
    "items": [
        {
            "statementId": 1,
            ....
        },
.....



Here's a short example and its corresponding output:
spool a
select 1 from dual;
DESC dual;
begin
 null;
end;
/
spool off



{
    "env": {
        "defaultTimeZone": "America/New_York"
    },
    "items": [
        {
            "statementId": 1,
            "statementType": "sqlplus",
            "statementPos": {
                "startLine": 1,
                "endLine": 1
            },
            "statementText": "spool a",
            "response": [
                "SP2-0738: Restricted command: \n\"spool a\"\nnot available",
                "\n"
            ],
            "result": 0
        },
        {
            "statementId": 2,
            "statementType": "query",
            "statementPos": {
                "startLine": 2,
                "endLine": 2
            },
            "statementText": "select 1 from dual",
            "response": [],
            "result": 0,
            "resultSet": {
                "metadata": [
                    {
                        "columnName": "1",
                        "jsonColumnName": "1",
                        "columnTypeName": "NUMBER",
                        "precision": 0,
                        "scale": -127,
                        "isNullable": 1
                    }
                ],
                "items": [
                    {
                        "1": 1
                    }
                ],
                "hasMore": false,
                "limit": 1500,
                "offset": 0,
                "count": 1
            }
        },
        {
            "statementId": 3,
            "statementType": "sqlplus",
            "statementPos": {
                "startLine": 3,
                "endLine": 3
            },
            "statementText": "DESC dual",
            "response": [
                "Name  Null? Type        \n----- ----- ----------- \nDUMMY       VARCHAR2(1) \n"
            ],
            "result": 0
        },
        {
            "statementId": 4,
            "statementType": "plsql",
            "statementPos": {
                "startLine": 4,
                "endLine": 7
            },
            "statementText": "begin\n null;\nend;",
            "response": [
                "\nPL/SQL procedure successfully completed.\n\n"
            ],
            "result": 0
        },
        {
            "statementId": 5,
            "statementType": "sqlplus",
            "statementPos": {
                "startLine": 8,
                "endLine": 8
            },
            "statementText": "spool off",
            "response": [
                "SP2-0738: Restricted command: \n\"spool off\"\nnot available",
                "\n"
            ],
            "result": 0
        }
    ]
}
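A response in this shape can be walked mechanically: each entry in "items" either carries a resultSet (a query) or just a response array (everything else). A minimal Python sketch, using a trimmed-down sample dict rather than a live call:

```python
import json

# A trimmed sample in the shape of the responses above (not a live result).
raw = json.dumps({
    "items": [
        {"statementId": 1, "statementType": "sqlplus",
         "response": ["SP2-0738: Restricted command"], "result": 0},
        {"statementId": 2, "statementType": "query",
         "response": [], "result": 0,
         "resultSet": {"metadata": [{"columnName": "1"}],
                       "items": [{"1": 1}],
                       "hasMore": False, "count": 1}},
    ]
})

def summarize(doc):
    """Return one summary line per processed statement."""
    lines = []
    for item in doc["items"]:
        if "resultSet" in item:          # a query: rows live in resultSet.items
            rows = item["resultSet"]["items"]
            lines.append(f"#{item['statementId']} query: {len(rows)} row(s)")
        else:                            # anything else: text lives in response
            lines.append(f"#{item['statementId']} {item['statementType']}: "
                         + "".join(item["response"]).splitlines()[0])
    return lines

for line in summarize(json.loads(raw)):
    print(line)
```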

Wednesday, September 06, 2017

ORDS 17.3 Beta - Introducing REST enabled SQL

Download

Go get it on the normal ORDS download page.

Versioning 


First and most obvious: ORDS is now on the same versioning scheme as SQL Developer, SQLcl, and Oracle Cloud. That is <year>.<quarter>.<patch> plus the same tail we've always had, which is <julian day>.<HH24>.<MI>. That makes this beta ords.17.3.0.248.08.45.zip. On to the features.


REST Enabled SQL


Once again the core sql engine from SQL Developer, which was wrapped into the command-line SQLcl, has been used for another feature. This same library is now used in many places in Oracle, from the install of Grid Infra for anyone running RAC databases to the Developer Cloud Service, where it adds Hudson build options for database deployments.

The new feature we are naming REST Enabled SQL, which in reality is more of a REST enabled SQLcl. The feature is OFF by default and can be activated with the following line added to the defaults.xml file:

<entry key="restEnabledSql.active">true</entry>


Once that option is enabled, there is now an endpoint for EVERY REST enabled schema, such as http://localhost:9090/ords/klrice/_/sql . This endpoint is POST only and can be authenticated to in 2 manners.

  1. Web server level: an authenticated user with the "SQL Developer" role will be able to access any REST enabled schema.  Yes, that means any REST enabled schema, so be sure to use this properly.
  2. DB authentication: this method will, as the name implies, only be allowed to access the same DB schema it is authenticated to. So HR can access http://localhost:9090/ords/hr/_/sql only.
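With either method, the call itself is plain HTTP Basic authentication. As an illustration, here is the request built with Python's standard library, using the same URL and HR/oracle credentials as the curl examples; the actual urlopen call is left commented out since it needs a running ORDS:

```python
import base64
import urllib.request

url = "http://localhost:9090/ords/hr/_/sql"
sql = "select * from dual;"

req = urllib.request.Request(url, data=sql.encode("utf-8"), method="POST")
req.add_header("Content-Type", "application/sql")

# HTTP Basic auth: base64("user:password") in the Authorization header.
token = base64.b64encode(b"HR:oracle").decode("ascii")
req.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(req) would perform the call against a running ORDS.
print(req.get_header("Authorization"))
```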


Then it's as simple as calling the REST endpoint, authenticating, and tossing any amount of sql at it: either a single sql statement or an entire script.


$ curl -X "POST" "http://localhost:9090/ords/hr/_/sql"  \
       -H "Content-Type: application/sql"               \
       -u HR:oracle                                     \
   -d $'select count(1) abc from user_objects;select * from dual;' 


{
    "env": {
        "defaultTimeZone": "America/New_York"
    },
    "items": [
        {
            "response": [],
            "result": 0,
            "resultSet": {
                "count": 1,
                "hasMore": false,
                "items": [
                    {
                        "abc": 35
                    }
                ],
                "limit": 1500,
                "metadata": [
                    {
                        "columnName": "ABC",
                        "columnTypeName": "NUMBER",
                        "isNullable": 1,
                        "jsonColumnName": "abc",
                        "precision": 0,
                        "scale": -127
                    }
                ],
                "offset": 0
            },
            "statementId": 1,
            "statementPos": {
                "endLine": 1,
                "startLine": 1
            },
            "statementText": "select count(1) abc from user_objects",
            "statementType": "query"
        },
        {
            "response": [],
            "result": 0,
            "resultSet": {
                "count": 1,
                "hasMore": false,
                "items": [
                    {
                        "dummy": "X"
                    }
                ],
                "limit": 1500,
                "metadata": [
                    {
                        "columnName": "DUMMY",
                        "columnTypeName": "VARCHAR2",
                        "isNullable": 1,
                        "jsonColumnName": "dummy",
                        "precision": 1,
                        "scale": 0
                    }
                ],
                "offset": 0
            },
            "statementId": 2,
            "statementPos": {
                "endLine": 3,
                "startLine": 3
            },
            "statementText": "select * from dual",
            "statementType": "query"
        }
    ]
}




The fine print.

Supported Commands


There's a number of things in this SQLcl library that are disabled because they touch the host operating system or reach out to the network. Appendix D of the ORDS documentation lists these, but to give a flavor, they are things like:
  • host
  • spool
  • @, @@, start
  • connect
  • cd
  • ....
Basically, if the command can touch/read/write the file system in any way: nope. If the command can reach out over the network: nope.

Number of Rows Returned

Also, the number of rows returnable is governed by a flag in defaults.xml to prevent a runaway query. Exporting a bazillion rows is not a use case for this feature.
<entry key="jdbc.maxRows">1500</entry>
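Because of this cap, a client that genuinely needs more rows has to page using the offset and limit fields of a JSON request together with the hasMore flag in each resultSet. A sketch of that loop in Python, with a stub standing in for the real POST:

```python
def fetch_page(offset, limit):
    """Stub for a real POST to the /_/sql endpoint with a JSON body carrying
    statementText, offset and limit; returns a resultSet-shaped dict."""
    data = [{"n": i} for i in range(7)]          # pretend 7 total rows
    page = data[offset:offset + limit]
    return {"items": page,
            "offset": offset,
            "count": len(page),
            "hasMore": offset + limit < len(data)}

def fetch_all(limit=3):
    """Keep requesting pages until the server reports hasMore == false."""
    rows, offset = [], 0
    while True:
        rs = fetch_page(offset, limit)
        rows.extend(rs["items"])
        if not rs["hasMore"]:
            return rows
        offset += rs["count"]

print(len(fetch_all()))   # collects all 7 stub rows in pages of 3
```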



Coming Next...

This feature not only supports a 'plain' sql script but also a JSON language for sending more robust requests. This is a short example that shows some of the powerful features: sending in a select with an offset, a limit, bind variables, and an SCN.






Wednesday, July 12, 2017

Profiling a Java + JDBC Application

NetBeans


First, there's NO Java coding nor Java source code needed to profile a Java program this way. NetBeans added this a while back but I only found it recently: the ability to attach to any Java program and profile the SQL going across JDBC. The dev team's blog on it is here: http://jj-blogger.blogspot.nl/2016/05/netbeans-sql-profiler-take-it-for-spin.html


SQLcl

SQLcl is our Java library for running sql and sql scripts, and it has been in SQLDev since day 0 back in '05/06. We factored it out and wrapped a command line around it. This makes it easier to test for features, regressions, performance, ... as well as giving a new command line with extended features. This library is also what performs the Grid Infra installs these days, and it is embedded in Oracle REST Data Services. It's quite proven and tested. It's all Java, using plain JDBC to talk to the database, which means anything done to profile it is applicable to any Java program: SQLDev, ORDS, custom JDBC code, anything.




Profiling


This new feature in NetBeans is very simple to use and there's no need to have the sources of the Java code. Off the Profile menu -> Attach to External Process.






Then set the Profile to SQL Queries


Click Attach, which shows a list of running java processes.  This is what SQLcl will look like.





Running the Program

Now once JDBC traffic starts being issued, it's captured with timings and occurrence counts for each statement, along with the Java stack for where the call originated. Next up is the hardest part: what the heck does all this data mean? When is fast fast enough?

What to change?


Below is what an APEX install looks like on my laptop during the middle of the process. There's a lot of data to look at. The slowest statement is the dbms registry validation. Is that bad? Can it be sped up? Probably not. The most called is the check for DBMS_OUTPUT. Can that be reduced? Also probably not.

This is when knowledge of the code and its intended actions is critical. For me, getting SQLcl from 19m down to 7m was fast enough. That was done with zero changes to the APEX install scripts, just from watching the traffic going to the database and analyzing it.

Change #1: SQLcl was name resolving every create or replace <PLSQL OBJECT>, then checking for errors on that object. Much faster is to simply check count(1) from user_errors without the name resolution. When there are no errors in the user_errors table, there's no need to name resolve, so that entire path was shortened. It's visible in this stack with the 1,106 times "select count(1) cnt from user_errors" was called.

Change #2: DBMS_OUTPUT was being checked after any and all commands sent to the database. That was reduced to only the calls that could produce output. For example, alter session doesn't need to be checked. That change reduced the number of db calls being issued at all. The fastest call is the one you don't make.
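That second change boils down to a simple gate: classify each statement and only poll DBMS_OUTPUT for kinds that can actually produce output. An illustrative Python sketch, not SQLcl's actual code, and the type names here are assumptions:

```python
# Statement kinds that can emit DBMS_OUTPUT: PL/SQL blocks and procedure
# calls. DDL and session-level commands cannot, so polling is skipped.
CAN_PRODUCE_OUTPUT = {"plsql", "call"}

def needs_output_check(statement_type):
    """Return True only when a DBMS_OUTPUT round trip is worthwhile."""
    return statement_type in CAN_PRODUCE_OUTPUT

for stmt in ("plsql", "ddl", "altersession", "call"):
    print(stmt, needs_output_check(stmt))
```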

and on and on.

Nothing is more important than the knowledge of the intended outcome.








Tuesday, July 11, 2017

SQLcl 17.2

New Versioning Scheme

Starting with this release the numbering scheme has changed. All releases will now be numbered <year>.<quarter>.<patch> followed by build numbers.

So the new SQLcl is 17.2.0.184.0917.  

Breaking that down. 
  • 17   - Year
  • 2    - Quarter
  • 0    - Patch number
  • 184  - Julian day
  • 0917 - Hour and minute the build was done
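That breakdown is easy to do mechanically; a small Python sketch:

```python
def parse_version(v):
    """Split a <year>.<quarter>.<patch>.<julian day>.<HHMM> version string."""
    year, quarter, patch, julian, hhmm = v.split(".")
    return {"year": int(year), "quarter": int(quarter), "patch": int(patch),
            "julian_day": int(julian), "built_at": hhmm[:2] + ":" + hhmm[2:]}

print(parse_version("17.2.0.184.0917"))
```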

New Features

Securing literals was introduced here: http://krisrice.blogspot.com/2015/09/sqlcl-more-secure-now-with-rest.html so it is not new. What is new is control over when it's done. Previously, SQLcl secured all literals for anything that was issued. Now there's a control for when/how deep to check.

The default is that any anonymous block less than 10 lines will be scrubbed automatically. This will catch the majority of uses. To ratchet up what is checked, "set secureliterals ON" will secure every block completely. There is a performance impact to this if there are very large blocks, such as in the APEX installation, which has some blocks over 1k in size.

The opposite is also there, to disable this feature: set secureliterals OFF


Here's an example of what happens. The 'abcxyz' is removed and turned into a bind, :SqlDevBind1ZInit1:


SQL> declare
  2    l_local varchar2(20);
  3  begin
  4    l_local := 'abcxyz';
  5    dbms_output.put_line(l_local || chr(10));
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select sql_text from v$sql where sql_text like '%abcxyz%';
SQL_TEXT      
                                                                                                                                                                                                                                  
DECLARE 
SqlDevBind1Z_1 VARCHAR2(32767):=:SqlDevBind1ZInit1;  
BEGIN 
   declare   
       l_local varchar2(20); 
   begin   
       l_local := 'abcxyz';   
       dbms_output.put_line(l_local || chr(TO_NUMBER( SqlDevBind1Z_1))); 
   end;  
 :AUXSQLDBIND1:=SqlDevBind1Z_1;  
END;  



New Performance


So I spent the better part of 2 weeks in the NetBeans profiler, and the outcome was well worth the time. ALL these numbers are from my laptop, so mileage will vary. APEX is probably one of the largest / most complicated sets of sql / plsql scripts to install into a database, so I used that as my baseline. The SQLcl version I started from took 19m27.352s to install APEX. For comparison, I ran the same install with SQL*Plus, which took almost 10 full minutes less at 9m59.789s. SOOOO clearly there was an issue here.

The key thing is knowing WHAT your application should be doing and how it should be doing it. There were a number of things that SQLcl was being overly aggressive about, such as securing literals, which was introduced here: http://krisrice.blogspot.com/2015/09/sqlcl-more-secure-now-with-rest.html . Then there were calls that were repetitive and could simply be removed. The next boost was from being more lax on dbms_output: SQLcl was checking after things like DDL that clearly can't have output, so there is no need to check.

The end result is that, with secure literals turned off, the install now takes 7m17.635s on my machine.




Thursday, June 29, 2017

Parameterizing Jmeter for testing APEX

A while ago we needed to stress a system using the APEX Brookstrut demo application. The obvious choice for this was JMeter. How to set up JMeter to record web traffic by becoming a web proxy is a well-known and well-documented process. Anyone that hasn't seen it, check this PDF to see how easy it is. There were a couple of issues to get around. First, importing the application again and again may yield a different application ID with each import. Next, the hostname and port may change depending on the environment. Then there's using the same session ID in APEX to avoid generating a new session more than needed.

That led to this setup.

Configuration

The first 3 parts are for tracking the cookies and any HTTP headers, but most important is the User Defined section. In here, I defined variables for the webserver ( aka ORDS ) hostname/port and protocol, plus the APEX application ID and the home page for the test.






The next step down is to define the HTTP Request Defaults. This is where the user variables start to come into play, as you can see: the server name/port/protocol are all referencing the variables from above.






Initial Request

The next part of the test is to hit the home page of the application with a bogus session ID. This kicks APEX into creating a real session ID.






Now, to get that session ID for later, I put in the JSR 223 Post Processor seen in the tree right below the test.




The actual javascript for the test is in this Gist:


The javascript extracts the p_instance, p_flow_id, ... values, which you see at the bottom of the script. These are then placed into the dictionary for all subsequent requests to reference.
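The Gist itself is JavaScript run by the JSR 223 post-processor; the extraction it performs can be illustrated with a regex over the page HTML. The input fragment below is a made-up approximation of APEX's hidden fields, not taken from the Gist:

```python
import re

# A fragment shaped like the hidden fields on an APEX page (illustrative only).
html = """
<input type="hidden" id="pInstance" name="p_instance" value="1234567890" />
<input type="hidden" id="pFlowId" name="p_flow_id" value="104" />
"""

def extract(name, page):
    """Pull the value attribute of the hidden input with the given name."""
    m = re.search(r'name="%s"\s+value="([^"]*)"' % re.escape(name), page)
    return m.group(1) if m else None

p_instance = extract("p_instance", html)
p_flow_id = extract("p_flow_id", html)
print(p_instance, p_flow_id)
```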


Now the only task left is to go to the recorded test and replace all the recorded parameters, which are hardcoded to a specific flow ID, session ID, etc., with the variables from the dictionary. For example, this shows the ${p_flow_id} and ${p_instance}.





Now there's a recorded, parameterized test that can be pointed at any installation of the application quite easily.