Tuesday, May 05, 2015

REST Data Services and SQL Developer












The database tools team released three new GA releases and an update to our SQLcl.


Official Releases are here:

   SQL Developer, Modeler, and Data Miner:
       https://blogs.oracle.com/otn/entry/news_oracle_updates_development_tools
       https://blogs.oracle.com/datamining/entry/oracle_data_miner_4_1

  REST Data Services now with SODA
       https://blogs.oracle.com/otn/entry/news_oracle_rest_enables_oracle
       https://blogs.oracle.com/databaseinsider/entry/oracle_database_12c_as_a

Some of the feedback we received was that ORDS was difficult to set up and get running.  This prompted an overhaul of the install process.  There are now a few ways to get going.  As with earlier releases, there is a command line.  There are now 'simple', 'advanced', and parameterized ( silent ) installs.  The simple install is the default and asks for minimal information.  The catch is that it makes assumptions, and some of those assumptions, like a USERS tablespace, may not work for your database.  In that case try the advanced install; it will prompt for _everything_ and then some, for a fully customized install.





Taking the simple install one step further, REST Data Services is now embedded into SQL Developer so you can get up and running faster than ever.  Under the Tools menu there are now options to Manage, Run, Install, and Uninstall ( although why would anyone? ).




The first page of the installation asks for two things.  The first is whether to use a specific ORDS or the embedded one.  The version number is printed for whichever is chosen to ensure the proper decision.  The second is simply where to store the configuration.  This configuration is just the normal set of files and is portable; it can be taken to any other ORDS to set it up.

The next page asks for the database credentials and connect string.  There is something new here in that the user is now ORDS_PUBLIC_USER.  This is a change that came with the decoupling from Application Express: ORDS is now completely standalone and APEX is no longer required.
If connecting as ORDS_PUBLIC_USER fails ( as in the user doesn't exist ), the wizard will prompt for the credentials needed to install ORDS.
When using APEX or any OWA Toolkit based applications, the database user those applications execute as needs to be entered.

In ORDS 2.x, the repository where the REST definitions were stored was inside Application Express' repository.  ORDS 3.0 has its own dedicated ORDS_METADATA database user which holds this information.  This means there could be definitions in both places.  For APEX 5.0, the screens in SQL Workshop will continue to operate against the tables in the APEX repository.  That means this wizard will have to be filled out with the 2.0 users for ORDS to continue to access that data.

When operating like this, ORDS will use the metadata from both the 3.0 repository and the APEX repository.  This gives developers the flexibility to continue developing wherever they prefer.  New features such as Auto Enablement will only be in the new ORDS 3.0 repository.

What this means to you:
  - Continue to use the APEX screens; everything will work just fine
  - REST definitions created in SQL Developer will go into the new ORDS schema
  - If there is a conflict, the ORDS repository will be chosen over the APEX repository.




This screen gives the option to start ORDS up as soon as the wizard finishes installing and setting up.  It also asks for the static files location; think /i/ for APEX.


The ORDS Administrator and ORDS RESTful User are used to connect to the running ORDS from SQL Developer for remote administration.

A quick final review of the options and it's up and running.




Friday, May 01, 2015

Logstash and Oracle Database

    For anyone that doesn't know what Logstash is, head over to http://logstash.net/ and watch the video on the home page.  Mark Rittman has done a great blog post on the full ELK stack with OBIEE.

    This blog post is a first cut at an input plugin for Logstash to gather metrics, logs, or anything that can be expressed in sql.  A huge caution that this is a 0.01 attempt and will get better as I need it to.  Below is a basic config that selects the session count every 5 seconds and reports it, in this case, just to STDOUT.  There are many choices of where to send this information, such as to Elasticsearch for visualization in Kibana as Mark showed in the OBIEE example.

    If anyone uses this, changes it, likes it, doesn't like it,  let me know.

input { 
  orasql { sql      => "SELECT count(1) sessions from v$session "
           dbuser   => "klrice"
           dbpasswd => "klrice"
           dburl    => "localhost/orcl"
           interval =>  5
      } 
}
output {
 stdout { codec => rubydebug }
}

Running this just needs the plugin path added so Logstash can find the orasql input.

     The plugin itself is where everything oracle-wise is happening.  Since Logstash is written in JRuby, I'm able to use the normal Oracle JDBC Driver to access the database.  For this case I have a ./lib folder with a local copy.  This could just as easily be $ORACLE_HOME/jdbc/lib/ojdbc6.jar to ensure using the same version as the database when on the same server.

     Since connecting and establishing a session is expensive when repeatedly issuing sql, I have kept a connection open.  The connection could be closed for any number of reasons, so there's an auto-reconnect in here too: getConnection is called before any sql is executed.   Before someone asks: no, sqldev will not get an auto-reconnect like this.
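The reconnect-on-demand pattern is easy to see in isolation.  Here's a minimal plain-Ruby sketch of the same idea ( the connection class here is a stand-in for the real JDBC connection, not actual plugin code ):

```ruby
# Stand-in for a JDBC connection; like java.sql.Connection#isValid, it
# can be asked whether the session is still usable.
class FakeConnection
  attr_accessor :valid
  def initialize; @valid = true; end
  def valid?; @valid; end
end

$conn = nil
$connect_count = 0

# Return the cached connection, opening a new one only when the old one
# is missing or no longer valid: the same idea as getConnection below.
def get_connection
  if $conn.nil? || !$conn.valid?
    $connect_count += 1
    $conn = FakeConnection.new
  end
  $conn
end

first = get_connection    # opens a connection
again = get_connection    # reuses the same one
again.valid = false       # simulate the database dropping the session
fresh = get_connection    # transparently reconnects
```

The same check-then-reuse shape is what getConnection does against the real driver in the plugin below.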




# encoding: utf-8
require "logstash/inputs/base"
require "logstash/namespace"
require "socket" # for Socket.gethostname
require 'java'
# SETUP THE JDBC DRIVER TO YOUR LOCATION
$CLASSPATH << "lib/ojdbc6.jar"

java_import 'oracle.jdbc.OracleDriver'
java_import 'java.sql.DriverManager'


# Run a sql statement on an interval and capture each row as an event.
#
# Notes:
#
# * The '@source' of this event will be the sql run.
# * Each non-null column of a row becomes a field on the event, and the
#   'message' is a JSON-style rendering of the row.
#
class LogStash::Inputs::OraSQL < LogStash::Inputs::Base

  config_name "orasql"
  milestone 2
   
  $currConnection = nil

  default :codec, "plain"

  # Set this to true to enable debugging on an input.
  config :debug, :validate => :boolean, :default => false, :deprecated => "This setting was never used by this plugin. It will be removed soon."

  # SQL to run. For example, "select * from emp"
  config :sql, :validate => :string, :required => true

  # Database user to connect as. For example, "scott"
  config :dbuser, :validate => :string, :required => true ,  :default => "/"

  # Password for the database user.
  config :dbpasswd, :validate => :string, :required => false , :default => ""

  # JDBC thin url of the database. For example, "//localhost/orcl"
  config :dburl, :validate => :string, :required => true , :default => "//localhost/orcl"

  # Interval to run the sql. Value is in seconds.
  config :interval, :validate => :number, :required => true , :default => 120

  public
  def register
    @logger.info("Registering SQL Input", :type => @type,
                 :sql => @sql, :interval => @interval)
  end # def register
  
  public 
  def getConnection
      if $currConnection == nil  or  ! $currConnection.isValid(100)  
           oradriver = OracleDriver.new
           DriverManager.registerDriver oradriver

           con_props = java.util.Properties.new
           con_props.setProperty("user", @dbuser)
           con_props.setProperty("password", @dbpasswd )
 
           conn =  Java::oracle.jdbc.OracleDriver.new.connect('jdbc:oracle:thin:@' + @dburl, con_props)

           conn.auto_commit = false

           $currConnection = conn
      end 
      
      return $currConnection  
       
  end # end getConnection

  public
  def run(queue)
    hostname = Socket.gethostname
   
    loop do
      start = Time.now
      @logger.info? && @logger.info("Running SQL", :sql => @sql)

      conn = getConnection

      stmt = conn.prepare_statement @sql
      rset = stmt.execute_query
      while ( rset.next )
         i=1
         event = LogStash::Event.new
         decorate(event)
         cols = rset.getMetaData.getColumnCount
         msg = ""
         r=0
         while ( i <= cols ) 
             val = rset.getString(i)
             if ( val != nil ) 
                if ( r > 0 )
                   msg = msg + ","
                end
                event[ rset.getMetaData.getColumnName(i).downcase ] =  val
                msg = msg +  "\"" +rset.getMetaData.getColumnName(i).downcase +  "\" : \"" + val + "\"" 
                r=r+1
             end
             i = i + 1
         end
         event['message'] = "{" + msg + "}"
        queue << event
      end
      rset.close
      stmt.close

      duration = Time.now - start
      @logger.info? && @logger.info("Command completed", :sql => @sql,
                                    :duration => duration)

      # Sleep for the remainder of the interval, or 0 if the duration ran
      # longer than the interval.
      sleeptime = [0, @interval - duration].max
      if sleeptime == 0
        @logger.warn("Execution ran longer than the interval. Skipping sleep.",
                     :sql => @sql, :duration => duration,
                     :interval => @interval)
      else
        sleep(sleeptime)
      end
    end # loop
  end # def run
end # class LogStash::Inputs::OraSQL
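One note on the hand-built message string above: a value containing a quote would break the output.  If I were hardening this, Ruby's bundled JSON library could handle the quoting and escaping ( a sketch of the idea, not what the plugin currently does ):

```ruby
require 'json'

# Build the event message from column name/value pairs the way the row
# loop does, skipping NULL columns, but let JSON handle the escaping.
def build_message(columns)
  JSON.generate(columns.reject { |_name, value| value.nil? })
end

msg = build_message('sessions' => '42', 'client_info' => nil)
# msg is '{"sessions":"42"}'
```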





Thursday, April 16, 2015

Repeating another SQLCL EA release

    It's been a month and a lot of bugs have been fixed in that time and new features added. That adds up to time for another EA for our new sqlcl.  The third sqlcl EA is out now and can be found here on the SQL Developer 4.1 download page.  

  Not sure what this sqlcl thing is? Check here.


LOAD:  Making CSV loading to the db very easy

Here's a very quick example of the possibilities.

-- drop it to be sure
drop table emp2;

-- create an empty table
create table emp2 as select * from emp where rownum=0;

-- don't need to see it
set term off
-- set the format to CSV
set sqlformat csv
-- spool
spool emp.csv
-- spooling....
select * from emp;
-- spooled
spool off

-- need to see again
set term on
-- load into the new table
load EMP2 emp.csv



The output from running the script makes it very easy to see what happened.



REPEAT : Build your own monitor

This is another very simple yet useful command.  The idea is to build your own sql that you may want to run to see what's going on.  This example is a simple select against v$session with a decode on the command to see what the sessions are doing.  Running this is easy; running it again and again means hitting / and Enter a lot.

Enter the repeat command.  Just specify how many times to run and what the sleep interval is, and sqlcl takes care of running it.
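The semantics are easy to picture in plain Ruby ( a sketch of the idea, not sqlcl's actual implementation ): run the statement, sleep for the interval, and do it the requested number of times:

```ruby
# Run a block `iterations` times, sleeping `interval` seconds between
# runs; roughly what `repeat <iterations> <interval>` does with the
# last statement in the buffer.
def repeat(iterations, interval)
  results = []
  iterations.times do |i|
    results << yield(i + 1)                 # run the "statement"
    sleep(interval) if i < iterations - 1   # wait before the next run
  end
  results
end

# Pretend each run is a query returning a session count.
counts = repeat(3, 0) { |iteration| "run #{iteration}: 42 sessions" }
```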



In the header, there's a status showing which iteration it is, what the time is ( and yes I know, a bug in the number format ) and what the delay is before running it again.




The Report being used

The SQL I'm using here is the same sql as in the SQL Developer Sessions report.  It can be found off the Tools->Sessions menu, and additionally in the Reports section under DBA reports.



select vs.sid ,serial# serial, vs.sql_id,
       vs.username "Username",
       case when vs.status = 'ACTIVE' then last_call_et else null end "Seconds in Wait",
       decode(vs.command,  
                         0,null, 
                         1,'CRE TAB', 
                         2,'INSERT', 
                         3,'SELECT', 
                         4,'CRE CLUSTER', 
                         5,'ALT CLUSTER', 
                         6,'UPDATE', 
                         7,'DELETE', 
                         8,'DRP CLUSTER', 
                         9,'CRE INDEX', 
                         10,'DROP INDEX', 
                         11,'ALT INDEX', 
                         12,'DROP TABLE', 
                         13,'CRE SEQ', 
                         14,'ALT SEQ', 
                         15,'ALT TABLE', 
                         16,'DROP SEQ', 
                         17,'GRANT', 
                         18,'REVOKE', 
                         19,'CRE SYN', 
                         20,'DROP SYN', 
                         21,'CRE VIEW', 
                         22,'DROP VIEW', 
                         23,'VAL INDEX', 
                         24,'CRE PROC', 
                         25,'ALT PROC', 
                         26,'LOCK TABLE', 
                         28,'RENAME', 
                         29,'COMMENT', 
                         30,'AUDIT', 
                         31,'NOAUDIT', 
                         32,'CRE DBLINK', 
                         33,'DROP DBLINK', 
                         34,'CRE DB', 
                         35,'ALTER DB', 
                         36,'CRE RBS', 
                         37,'ALT RBS', 
                         38,'DROP RBS', 
                         39,'CRE TBLSPC', 
                         40,'ALT TBLSPC', 
                         41,'DROP TBLSPC', 
                         42,'ALT SESSION', 
                         43,'ALT USER', 
                         44,'COMMIT', 
                         45,'ROLLBACK', 
                         46,'SAVEPOINT', 
                         47,'PL/SQL EXEC', 
                         48,'SET XACTN', 
                         49,'SWITCH LOG', 
                         50,'EXPLAIN', 
                         51,'CRE USER', 
                         52,'CRE ROLE', 
                         53,'DROP USER', 
                         54,'DROP ROLE', 
                         55,'SET ROLE', 
                         56,'CRE SCHEMA', 
                         57,'CRE CTLFILE', 
                         58,'ALTER TRACING', 
                         59,'CRE TRIGGER', 
                         60,'ALT TRIGGER', 
                         61,'DRP TRIGGER', 
                         62,'ANALYZE TAB', 
                         63,'ANALYZE IX', 
                         64,'ANALYZE CLUS', 
                         65,'CRE PROFILE', 
                         66,'DRP PROFILE', 
                         67,'ALT PROFILE', 
                         68,'DRP PROC', 
                         69,'DRP PROC', 
                         70,'ALT RESOURCE', 
                         71,'CRE SNPLOG', 
                         72,'ALT SNPLOG', 
                         73,'DROP SNPLOG', 
                         74,'CREATE SNAP', 
                         75,'ALT SNAP', 
                         76,'DROP SNAP', 
                         79,'ALTER ROLE', 
                         85,'TRUNC TAB', 
                         86,'TRUNC CLUST', 
                         88,'ALT VIEW', 
                         91,'CRE FUNC', 
                         92,'ALT FUNC', 
                         93,'DROP FUNC', 
                         94,'CRE PKG', 
                         95,'ALT PKG', 
                         96,'DROP PKG', 
                         97,'CRE PKG BODY', 
                         98,'ALT PKG BODY', 
                         99,'DRP PKG BODY', 
                         to_char(vs.command)) "Command",
    vs.machine "Machine",
    vs.osuser "OS User", 
    lower(vs.status) "Status",
    vs.program "Program",
    vs.module "Module",
    vs.action "Action",
    vs.resource_consumer_group,
    vs.client_info,
    vs.client_identifier
from gv$session vs 
where vs.USERNAME is not null
and nvl(vs.osuser,'x') <> 'SYSTEM'
and vs.type <> 'BACKGROUND'
order by 1;

Friday, January 30, 2015

3 new things about sdsql


New Name !


The first is a new name: this EA is named sqlcl, for sql command line.  However, the binary to start it up is simply sql.  Nothing is easier when you need to run some sql than typing 'sql' and hitting enter.


#./sql klrice/klrice@//localhost/orcl

SQLcl: Release 4.1.0 Beta on Fri Jan 30 12:53:05 2015

Copyright (c) 1982, 2015, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production 


There's never too much Information


The next thing is that the info command has a lot more in it now.  Hit me up on twitter if you can think of more things it should show; some of the things here are directly from folks doing exactly that.  Sometimes functional specs can be done in 140 chars.



KLRICE@ORCL>
KLRICE@ORCL>info hr.employees
TABLE: EMPLOYEES 
  LAST ANALYZED:2015-01-17 12:06:26.0 
  ROWS         :107 
  SAMPLE SIZE  :107 
  INMEMORY     :DISABLED 
  COMMENTS     :employees table. Contains 107 rows. References with departments,
                       jobs, job_history tables. Contains a self reference. 

Columns 
NAME             DATA TYPE           NULL  DEFAULT    COMMENTS
*EMPLOYEE_ID     NUMBER(6,0)         No               Primary key of employees table.
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes              First name of the employee. A not null column.
 LAST_NAME       VARCHAR2(25 BYTE)   No               Last name of the employee. A not null column.
 EMAIL           VARCHAR2(25 BYTE)   No               Email id of the employee
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes              Phone number of the employee; includes country code and area code
 HIRE_DATE       DATE                No               Date when the employee started on this job. A not null column.
 JOB_ID          VARCHAR2(10 BYTE)   No               Current job of the employee; foreign key to job_id column of the
                                                      jobs table. A not null column.
 SALARY          NUMBER(8,2)         Yes              Monthly salary of the employee. Must be greater
                                                      than zero (enforced by constraint emp_salary_min)
 COMMISSION_PCT  NUMBER(2,2)         Yes              Commission percentage of the employee; Only employees in sales
                                                      department elgible for commission percentage
 MANAGER_ID      NUMBER(6,0)         Yes              Manager id of the employee; has same domain as manager_id in
                                                      departments table. Foreign key to employee_id column of employees table.
                                                      (useful for reflexive joins and CONNECT BY query)
 DEPARTMENT_ID   NUMBER(4,0)         Yes              Department id where employee works; foreign key to department_id
                                                      column of the departments table

Indexes
INDEX_NAME            UNIQUENESS  STATUS  FUNCIDX_STATUS  COLUMNS                COLUMN_EXPRESSION  
HR.EMP_JOB_IX         NONUNIQUE   VALID                   JOB_ID                                    
HR.EMP_NAME_IX        NONUNIQUE   VALID                   LAST_NAME, FIRST_NAME                     
HR.EMP_EMAIL_UK       UNIQUE      VALID                   EMAIL                                     
HR.EMP_EMP_ID_PK      UNIQUE      VALID                   EMPLOYEE_ID                               
HR.EMP_MANAGER_IX     NONUNIQUE   VALID                   MANAGER_ID                                
HR.EMP_DEPARTMENT_IX  NONUNIQUE   VALID                   DEPARTMENT_ID                             


References
TABLE_NAME   CONSTRAINT_NAME  DELETE_RULE  STATUS   DEFERRABLE      VALIDATED  GENERATED  
DEPARTMENTS  DEPT_MGR_FK      NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
EMPLOYEES    EMP_MANAGER_FK   NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
JOB_HISTORY  JHIST_EMP_FK     NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  

KLRICE@ORCL>




Info doesn't end there.  You can do info+ and get different data, and I'm in the process of changing the indexes to show some stats also, like the main table.

KLRICE@ORCL>info+ hr.employees
TABLE: EMPLOYEES 
  LAST ANALYZED:2015-01-17 12:06:26.0 
  ROWS         :107 
  SAMPLE SIZE  :107 
  INMEMORY     :DISABLED 
  COMMENTS     :employees table. Contains 107 rows. References with departments,
                       jobs, job_history tables. Contains a self reference. 

Columns 
NAME             DATA TYPE           NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM  
*EMPLOYEE_ID     NUMBER(6,0)         No               100                   206                   107            NONE       
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes              Adam                  Winston               91             NONE       
 LAST_NAME       VARCHAR2(25 BYTE)   No               Abel                  Zlotkey               102            NONE       
 EMAIL           VARCHAR2(25 BYTE)   No               ABANDA                WTAYLOR               107            NONE       
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes              011.44.1343.329268    650.509.4876          107            NONE       
 HIRE_DATE       DATE                No               1987.06.17.00.00.00   2000.04.21.00.00.00   98             NONE       
 JOB_ID          VARCHAR2(10 BYTE)   No               AC_ACCOUNT            ST_MAN                19             NONE       
 SALARY          NUMBER(8,2)         Yes              2100                  24000                 57             NONE       
 COMMISSION_PCT  NUMBER(2,2)         Yes              .1                    .4                    7              NONE       
 MANAGER_ID      NUMBER(6,0)         Yes              100                   205                   18             NONE       
 DEPARTMENT_ID   NUMBER(4,0)         Yes              10                    110                   11             NONE       

Indexes
INDEX_NAME            UNIQUENESS  STATUS  FUNCIDX_STATUS  COLUMNS                COLUMN_EXPRESSION  
HR.EMP_JOB_IX         NONUNIQUE   VALID                   JOB_ID                                    
HR.EMP_NAME_IX        NONUNIQUE   VALID                   LAST_NAME, FIRST_NAME                     
HR.EMP_EMAIL_UK       UNIQUE      VALID                   EMAIL                                     
HR.EMP_EMP_ID_PK      UNIQUE      VALID                   EMPLOYEE_ID                               
HR.EMP_MANAGER_IX     NONUNIQUE   VALID                   MANAGER_ID                                
HR.EMP_DEPARTMENT_IX  NONUNIQUE   VALID                   DEPARTMENT_ID                             


References
TABLE_NAME   CONSTRAINT_NAME  DELETE_RULE  STATUS   DEFERRABLE      VALIDATED  GENERATED  
DEPARTMENTS  DEPT_MGR_FK      NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
EMPLOYEES    EMP_MANAGER_FK   NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
JOB_HISTORY  JHIST_EMP_FK     NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  

KLRICE@ORCL>


Those who don't know history are doomed to repeat.


The last thing to be sure to take a look at is the history.  History by itself is great, however we wanted to show how often the history is used, so there's now "history usage" which will show how many times each history item has been run.  The other thing new here is that instead of timing statements only when the user requests it, we now time and record everything.  You can access it very simply with "history time", which will show how long the things in the history took.  The format of the time is HH:MI:SS.fff, and we shorten the leading parts when they are zero to make it easier to read and format.






KLRICE@ORCL>history time
....
 20           info s
 21  (00.050) select table_name from user_tables
 22           info+ s
 23           info hr.dept
 24           into scott.emp
 25           info scott.emp
 26           info+ scott.emp
 27           info all_tables
 28           info all_tab_comments
 29  (04.531) select * from all_tables t,all_tab_comments tc where t.owner = tc.owner and t.table_name = tc.table_name
 30  (00.420) select * from all_tables t,all_tab_comments tc  where t.owner = tc.owner (+) and t.table_name = tc.table_name(+) 
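The shortened timings in the listing above ( "00.050", "04.531" ) can be derived like this; a Ruby sketch of the idea, not sqlcl's actual code:

```ruby
# Format elapsed seconds as HH:MI:SS.fff, trimming leading zero groups
# the way the history listing does.
def history_time(seconds)
  hours, rem = seconds.divmod(3600)
  mins, secs = rem.divmod(60)
  full = format('%02d:%02d:%06.3f', hours, mins, secs)
  # Drop up to two leading "00:" groups so short timings stay compact.
  full.sub(/\A00:/, '').sub(/\A00:/, '')
end

history_time(0.05)    # "00.050", the 50ms query from the listing
history_time(4.531)   # "04.531"
history_time(65.0)    # "01:05.000", minutes kept once non-zero
```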



While these are 3 really quick things, there's a lot more in this new EA. We will blog about the other new things in the coming days.

Wednesday, December 17, 2014

SDSQL - Output Formats

   In SQL Developer, Jeff has shown in many tips & tricks talks and here on his blog how to get data from any select into a few formats.  The most common, I would say, is csv.  Nothing here has changed.  Well, almost.  What will be new is that the html format is getting a facelift and a json format is being added; neither is in the version of sdsql or sql developer that is on otn yet.

    As expected, all those "hints" work in sdsql as well.  What's new is there is a "set sqlformat" command so that all results come out in that format.  All the same formats work: xml, html, delimited, insert, loader, fixed, text.  Also there's a new one named ansiconsole which is described here.





  Now with the new things being added, it could be easy to create a "build" script of sorts.  Combine this with the new ddl command into something like this:

SQL>set sqlformat insert
SQL>spool my_build_script.sql
SQL>ddl emp
SQL>select * from emp;
SQL>apex export 101
SQL>spool off

What you would end up with is a sql script with the ddl and data for emp and the Apex app export.  This script could then be checked in, given to a DBA, or even used as a nightly backup of sorts.