Feed aggregator

Amazon Rekognition – Computer Vision On AWS

Online Apps DBA - Tue, 2021-01-26 23:56

Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also […]

The post Amazon Rekognition – Computer Vision On AWS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Flexible Load Balancer in Oracle Cloud (OCI)

Online Apps DBA - Tue, 2021-01-26 23:39

No more fixed bandwidth Load Balancer! With OCI Flexible Load Balancing service, you are no longer restricted to fixed bandwidth load balancer shapes or scaling based only on the general traffic patterns. You can choose a custom minimum bandwidth of 10 Mbps and an optional maximum bandwidth of 8000 Mbps. Check out this blog at […]

The post Flexible Load Balancer in Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

MicroK8s: kubectl get componentstatus deprecated - etcd status missing

Dietrich Schroff - Tue, 2021-01-26 15:46


 

If you want to check the health of the basic components with

kubectl get componentstatuses 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok       

then etcd is missing from the output.

This is due to a change in the Kubernetes API: https://kubernetes.io/docs/setup/release/notes/#deprecation-5


The command to check etcd is:

kubectl get --raw='/readyz?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
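
(An aside that is not in the original post: the API server also exposes each health check individually, so if you only care about etcd you should be able to query just that check on any reasonably recent Kubernetes release.)

# not from the original post; queries only the etcd readiness check
kubectl get --raw='/readyz/etcd'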


Index Hints

Jonathan Lewis - Tue, 2021-01-26 10:28

At the end of the previous post on index hints I mentioned that I had been prompted to complete a draft from a few years back because I’d been sent an email by Kaley Crum showing the optimizer ignoring an index_rs_asc() hint in a very simple query. Here, with some cosmetic changes, is the example he sent me.

rem
rem     Script: index_rs_kaley.sql
rem     Dated:  Dec 2020
rem     Author: Kaley Crum
rem
rem     Last tested
rem             19.3.0.0
rem

create table range_scan_me(
        one,
        letter 
)
compress
nologging
as
with rowgen_cte as (
        select  null
        from    dual
        connect by level <=  11315
)
select
        1 one,
        case 
                when rownum <=  64e5     then 'A'
                when rownum  =  64e5 + 1 then 'B'
                when rownum <= 128e5     then 'C' 
        end     letter
from 
        rowgen_cte a
cross join 
        rowgen_cte b 
where 
        rownum <= 128e5
;

create index one_letter_idx on range_scan_me(one, letter) nologging;

The table has 12.8 million rows. Of the two columns the first always holds the value 1, the second has one row holding the value ‘B’, and 6.4M rows each holding ‘A’ and ‘C’. On my laptop it took about 20 seconds to create the table and 26 seconds to create the index; using a total of roughly 376 MB (29,000 blocks for the index, 18,500 blocks for the (compressed) table).

Since this is running on 19.3, Oracle will have created basic statistics on the table and index as it created them. Significantly, though, the statistics created during data loading do not include histograms, so the optimizer will not know that ‘B’ is a special case; all it knows is that there are three possible values for letter.
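
(As an aside, and not part of the original test: if you wanted the optimizer to see that ‘B’ is special you could gather a frequency histogram on letter after the load. A minimal sketch, assuming the table is in your own schema:)

-- not part of the original test: gather a frequency histogram on letter (3 distinct values)
begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'RANGE_SCAN_ME',
                method_opt => 'for columns letter size 3'
        );
end;
/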

Time now to query the data:

set serveroutput off
alter session set statistics_level=all;

select 
        /*+ index_rs_asc(t1 (one, letter)) */ 
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/

select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last'));

I’ve told the optimizer to use an index range scan, using the “description” method to specify the index I want it to use. The hint is definitely valid, and the index can definitely be used in this way to get the correct result. But here’s the execution plan:

------------------------------------------------------------------------------------------------------
| Id  | Operation        | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |      1 |        |      1 |00:00:00.01 |       8 |      4 |
|*  1 |  INDEX SKIP SCAN | ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.01 |       8 |      4 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_rs_asc(t1 (one, letter))

The plan gives us two surprises: first it ignores (and reports that it is ignoring) a perfectly valid hint. Secondly it claims to be using an index skip scan even though the common understanding of a skip scan is that it will be used when “the first column of the index doesn’t appear in the where clause”.

We can infer that the plan is truthful because it has taken only 8 buffer visits to get the result – that’s probably a probe down to the (1,’B’) index entry, then another probe to see if the last index leaf block has any entries in it where column one is greater than 1.

But there are a couple of little oddities about this “ignoring the index” line. First, if we hadn’t hinted the query at all it would have done a tablescan, so the “index” bit of the hint is being obeyed even if the “rs” bit isn’t. Then there’s this:

select 
        /*+ index_rs_desc(t1 (one, letter)) */ 
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/

-------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |      1 |        |      1 |00:00:00.01 |       8 |
|*  1 |  INDEX SKIP SCAN DESCENDING| ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.01 |       8 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_rs_desc(t1 (one, letter))

If we change the index_rs_asc() to index_rs_desc(), the optimizer still ignores the “range scan” bit of the hint, but honours the “descending” bit – we get an index skip scan descending.

Of course this example is a very extreme case – nevertheless it is a valid example of the optimizer behaving in a way that doesn’t seem very user-friendly. If we add ‘outline’ to the format options for the call to dbms_xplan.display_cursor() we’ll find that the index_ss_asc() and index_ss_desc() hints have been substituted for our attempted index_rs_asc() and index_rs_desc().

So, if we really are confident that an index range scan would work a lot better than an index skip scan what could we do? We could try telling it to use an index (possibly even an index range scan ascending), but not to do an index skip scan. Let’s test that and include the Outline Information in the execution plan:

select 
        /*+ index(t1) no_index_ss(t1) */
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
;


select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last outline'));


---------------------------------------------------------------------------------------------
| Id  | Operation        | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |      1 |        |      1 |00:00:00.78 |   14290 |
|*  1 |  INDEX RANGE SCAN| ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.78 |   14290 |
---------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index(t1)
           -  no_index_ss(t1)

It worked – we can see the index range scan, and we can see in the Buffers column of the plan why an index range scan was a bad idea – it’s taken 14,290 buffer visits to get the right result. If you check the index size I mentioned further up the page, and think about how I defined the data, you’ll realise that Oracle has started an index range scan at the leaf block holding (1,’B’) – which is half way along the index – and then walked every leaf block from there to the end of the index in an attempt to find any index entries with column one greater than 1.

The other thing to notice here is that the hint in the Outline Information is given as:

INDEX(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER"))

This was the hint that appeared in the outline whether I used the index() hint or the index_rs_asc() hint in the query. Similarly, when I tried index_desc() or index_rs_desc() as the hint the outline reported index_desc() in both cases.

If I try adding just this hint to the query the plan goes back to a skip scan. It’s another case where the hints in the Outline Information (hence, possibly, an SQL Plan Baseline) don’t reproduce the plan that the outline claims to be describing.
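
For reference, “adding just this hint” means running the query with the outline-style hint inlined; a sketch only, not reproduced from the original post:

-- sketch: the INDEX() hint copied from the Outline Data above, used inline
select
        /*+ index(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER")) */
        letter, one
from
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/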

Summary

Does Oracle ignore hints?

It looks as if the answer is still no, except it seems to think that a skip scan is just a special case of a range scan (and, from the previous article, a range scan is just a special case of a skip scan). So if you want to ensure that Oracle uses your preferred index strategy you may have to think about including various “no_index” hints to block the indexes you don’t want Oracle to use, and then no_index_ss() and no_index_ffs() to make sure it doesn’t use the wrong method for the index you do want to use. Even then you may find you don’t have quite enough options to block every index option that you’d like to block.
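
As a closing sketch (my own illustration, not a combination tested in the article), the belt-and-braces approach for the example above might look like this: name the index you want and explicitly block the skip scan and fast full scan methods for it.

-- sketch only: hints combined as the summary describes, not taken from the article
select
        /*+
                index_rs_asc(t1 one_letter_idx)
                no_index_ss(t1 one_letter_idx)
                no_index_ffs(t1 one_letter_idx)
        */
        letter, one
from
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/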

Upcoming Webinar: Oracle Database 21c New Security Features

Thursday, January 28, 2021 - 2:00 pm to 3:00 pm EST

Oracle Database 21c has been released and is the latest innovation release of the Oracle Database.  A number of new security features are included with this release and should be adopted when creating new databases or upgrading existing databases. This educational webinar will review the new security features and changes in security for this database release. Key new security features are blockchain table, gradual database password rollover, mandatory case sensitive passwords, and Unified Auditing enhancements.

>>> Register for this webinar <<< 

Oracle Database
Categories: APPS Blogs, Security Blogs

Index Hints

Jonathan Lewis - Mon, 2021-01-25 10:59

I’ve lost count of the number of times I’ve reminded people that hinting (correctly) is hard. Even the humble /*+ index() */ hint and its close relatives are open to misunderstanding and accidental misuse, leading to complaints that “Oracle is ignoring my hint”.

Strange though it may seem, I’m still not 100% certain of what some of the basic index hints are supposed to do, and even the “hint report” in the most recent versions of dbms_xplan.display_xxx() hasn’t told me everything I’d like to know. So if you think you know all about hints and indexing this blog note is for you.

I’ll start with a brief, and approximate, timeline for the basic index hints – starting from 8.0

Version                 Hint
8.0                     index
8.1                     index_asc, index_desc, index_ffs, no_index
9.0                     index_ss, index_ss_asc, index_ss_desc
10.1                    no_index_ffs, no_index_ss
11.1                    index_rs_asc, index_rs_desc
Saving these for later  change_dupkey_error_index, domain_index_filter, domain_index_no_sort, domain_index_sort, ignore_row_on_dupkey_index, index_combine, index_join, index_stats, local_indexes, num_index_keys, parallel_index, use_invisible_indexes, use_nl_with_index, xmlindex_rewrite, xmlindex_rewrite_in_select, xmlindex_sel_idx_tbl

For completeness I’ve included the more exotic index-related hints in the list (without a version), and I’ve even highlighted the rarely seen use_nl_with_index() hint to remind myself to raise a rhetorical question about it at the end of this piece.

In this list you’ll notice that the only hint originally available directed the optimizer to access a table by index, but in 8.1 that changed so that we could

  1. tell the optimizer about indexes it should not use
  2. specify whether the index access should use the index in ascending or descending order
  3. use an index fast full scan.

In 9i Oracle then introduced the index skip scan, with the option to specify whether the skip scan should be in ascending or descending order. The index_ss hint seems to be no more than a synonym for the index_ss_asc hint (or should that be the other way round); as far as I can tell the index_ss() hint will not produce a descending skip scan.

You’ll note that there’s no hint to block an index skip scan, until the hint no_index_ss() appears in 10g along with the no_index_ffs() hint to block the index fast full scan. Since 10g Oracle has got better at introducing both the “positive” and “negative” versions of a hint whenever it introduces any hints for new optimizer mechanisms.

Finally we get to 11g and if you search MOS you may still be able to find the bug note (4323868.8) that introduced the index_rs_asc() and index_rs_desc() hints for index range scan ascending and descending.

From MOS Doc 4323868.8: “This fix adds new hints to enforce that an index is selected only if a start/stop keys (predicates) are used: INDEX_RS_ASC INDEX_RS_DESC”

This was necessary because by this time the index() hint allowed the optimizer to decide for itself how to use an index and it was quite difficult to force it to use the strategy you really wanted.

It’s still a source of puzzlement to me that an explicit index() hint will sometimes be turned into an index_rs_asc() hint in the Outline Information (from a call to dbms_xplan.display_xxx()) that the optimizer wants to use to reproduce the plan, while there are other times that an explicit index_rs_asc() hint will be turned into a basic index() hint (which might not reproduce the original plan)!

The Warm-up

Here’s a little surprise that could only reveal itself in the 19c hint report – unless you were willing to read your way carefully through a 10053 (CBO) trace file in earlier versions of Oracle. It comes from a little investigation of the index_ffs() hint that I’ve kept repeating over the last 20 years.

rem
rem     Script:         c_indffs.sql
rem     Dated:          March 2001
rem     Author:         Jonathan Lewis
rem

create table t1
nologging
as
select 
        rownum                  id,
        rpad(mod(rownum,50),10) small_vc,
        rpad('x',50)            padding
from
        all_objects
where
        rownum <= 3000
;

alter table t1 modify id not null;

create index t_i1 on t1(id);
create index t_i2 on t1(small_vc,id);

set autotrace traceonly explain

select 
        count(small_vc)
from    t1
where
        id > 2750
;

select 
        /*+ index(t1) */
        count(small_vc)
from    t1
where
        id > 2750
;

select 
        /*+ index_ffs(t1) */
        count(small_vc)
from    t1
where
        id > 2750
;

select 
        /*+ index_ffs(t1) no_index(t1) */
        count(small_vc)
from    t1
where
        id > 2750
;

set autotrace off

I’ve created a table with two indexes, and then enabled autotrace to get the execution plans for 4 queries that vary only in their hinting. Here’s the plan (on 19.3, with my settings for system stats) for the first query:

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

It’s an index fast full scan on the t_i2 (two-column) index. If I add an index() hint to this query, will that allow Oracle to continue using the index fast full scan, or will it force Oracle into some other path? Here’s the plan for the query hinted with index(t1):

---------------------------------------------------------------------------------------------
| Id  | Operation                            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |      |     1 |    15 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |      |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1   |   250 |  3750 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T_I1 |   250 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">2750)

The optimizer has chosen an index range scan on the (single-column) t_i1 index. Since this path costs more than the index fast full scan it would appear that the index() hint does not allow the optimizer to consider an index fast full scan. So we might decide that an index_ffs() hint is appropriate to secure the plan we want – and here’s the plan we get with that hint:

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

As expected we get the index fast full scan we wanted. But we might want to add belts and braces – let’s include a no_index() hint to make sure that the optimizer doesn’t consider any other strategy for using an index. Since we’ve seen that the index() hint isn’t associated with the index fast full scan path it seems reasonable to assume that the no_index() is also not associated with the index fast full scan path. Here’s the plan we get from the final variant of my query with index_ffs(t1) no_index(t1):

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (2))
---------------------------------------------------------------------------
   2 -  SEL$1 / T1@SEL$1
         U -  index_ffs(t1) / hint conflicts with another in sibling query block
         U -  no_index(t1) / hint conflicts with another in sibling query block

The query has produced the execution plan we wanted – but only by accident. The hint report (which, by default, is the version that reports only the erroneous or unused hints) tells us that both hints have been ignored because they each conflict with some other hint in a “sibling” query block. In this case they’re conflicting with each other.

So the plan we get was our original unhinted plan – which made it look as if we’d done exactly the right thing to ensure that we’d made the plan completely reproducible. Such (previously invisible) errors can easily lead to complaints about the optimizer ignoring hints.

The Main Event

The previous section was about an annoying little inconsistency in the way in which the “negative” version of a hint may not correspond exactly to the “positive” version. There’s a more worrying issue to address when you try to be more precise in your use of basic index hints.

We’ve seen that an index() hint could mean almost anything other than an index fast full scan, while a no_index() hint (probably) blocks all possible uses of an index, but would you expect an index_rs_asc() hint to produce a skip scan, or an index_ss_asc() hint to produce a range scan? Here’s another old script of mine to create some data and test some hints:

rem
rem     Script:         skip_scan_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2009
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum  id
        from    all_objects 
        where   rownum <= 3000  -- > hint to avoid wordpress format issue
)
select
        mod(rownum,300)                                 addr_id300,
        mod(rownum,200)                                 addr_id200,
        mod(rownum,100)                                 addr_id100,
        mod(rownum,50)                                  addr_id050,
        trunc(sysdate) + trunc(mod(rownum,2501)/3)      effective_date,
        lpad(rownum,10,'0')                             small_vc,
        rpad('x',050)                                   padding
--      rpad('x',100)                                   padding
from
        generator       v1,
        generator       v2
where
        rownum <= 250000   -- > hint to avoid wordpress format issue
;

create index t1_i1 on t1(effective_date);
create index t1_i300 on t1(addr_id300, effective_date);
create index t1_i200 on t1(addr_id200, effective_date);
create index t1_i100 on t1(addr_id100, effective_date);
create index t1_i050 on t1(addr_id050, effective_date);

I’ve created a table with rather more indexes than I’ll be using. The significant indexes are t1_i1(effective_date) and t1_i050(addr_id050, effective_date). The former will be available for range scans, the latter for skip scans, when I test queries with predicates only on effective_date.

Choice of execution path can be affected by the system stats, so I need to point out that I’ve set mine with the following code:

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',500);
exception
        when others then null;
end;
/

And I’ll start with a couple of “baseline” queries and execution plans:

explain plan for
select 
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

alter index t1_i1 invisible;

explain plan for
select 
        /*+ index(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

You’ll notice at line 11 I’ve made the t1_i1 index invisible, and it will stay that way for a couple more tests. Here are the first two execution plans:

Unhinted
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  1500 | 28500 |   428   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |  1500 | 28500 |   428   (9)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00',
              'syyyy-mm-dd hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hinted with index(t1)
-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index(t1)

Unhinted, I’ve managed to rig the data and system stats so that the first path is a full tablescan; then, when I add the generic index(t1) hint, Oracle recognises and uses the hint in the best possible way, picking the lowest cost index skip scan.

A variation I won’t show here – if I change the hint to index_rs_asc(t1) the optimizer recognizes there is no (currently visible) index that could be used for an index range scan and does a full tablescan, reporting the hint as unused. It won’t try to substitute a skip scan for a range scan.
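
(That variation, for reference, is simply the same statement with the hint swapped; the resulting full tablescan plan is not reproduced here.)

-- the variation described above: index_rs_asc(t1) while t1_i1 is still invisible
explain plan for
select 
        /*+ index_rs_asc(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;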

What happens if I now try the index_ss(t1) hint without specifying an index? Firstly with the t1_i1 index still invisible, then after making t1_i1 visible again:

explain plan for
select 
        /*+ index_ss(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

Here are the two execution plans, first when t1_i1(effective_date) is still invisible:

-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index_ss(t1)

As you might expect the optimizer has picked the t1_i050 index for a skip scan. (There are 3 other candidates for the skip scan, but since they have more distinct values for their leading column they all turn out to have a higher cost than t1_i050.)

So let’s make the t1_i1 index visible and see what the plan looks like:

----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |  1500 | 28500 |   521   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1500 | 28500 |   521   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |  1500 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_ss_asc(t1)

The optimizer picks an index range scan using the t1_i1 index, and reports the hint as unused! For years I told myself that an index skip scan was derived as a small collection of range scans, so an index range scan was technically a “degenerate” skip scan, i.e. one where the “small collection” consisted of exactly one element. Oracle 19c finally told me I was wrong – the optimizer is ignoring the hint.

The fact that it’s a sloppy hint and you could have been more precise is irrelevant – if the optimizer won’t do a skip scan when you specify a range scan it shouldn’t do a range scan when you specify a skip scan (personal opinion).

We should check, of course, that a precisely targeted skip scan hint works before complaining too loudly – would index_ss(t1 t1_i050), or index_ss(t1 t1_i300), work when there’s a competing index that could produce a lower cost range scan? The answer is yes.

explain plan for
select 
        /*+ index_ss(t1 t1_i050) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index_ss(t1 t1_i050)

If you specify a suitable index in the index_ss() hint then the optimizer will use it and won’t switch to the index range scan. You can, of course, specify the index by description rather than name, so the hint /*+ index_ss(t1 (addr_id050, effective_date)) */ would have been equally valid and obeyed.

How much do you know?

I’ll finish off with a rhetorical question, which I’ll introduce with this description taken from the 19c SQL Tuning Guide section 9.2.1.6:

The related USE_NL_WITH_INDEX(table index) hint instructs the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table. The index is optional. If no index is specified, then the nested loops join uses an index with at least one join predicate as the index key.

An intuitive response to this hint would be to assume that most people expect nested loops to use index unique scans or range scans into the second table. So what would your initial expectation be about the validity of use_nl_with_index() if the only way the index could be used was with an index skip scan, or a full scan, or a fast full scan? What if there were two join predicates and there’s a path which could do a nested loop if it used two indexes to do an index join (index_join()) or an index bitmap conversion (index_combine())? Come to that, how confident are you that the hint will work if the index specified is a bitmap index?

Summary

It’s important to be as accurate and thorough as possible when using hints. Even when a hint is documented you may find that you can ask “what if” questions about the hint and find that the only way to get answers to your questions is to do several experiments.

If you’re going to put hints into production code, take at least a little time to say to yourself:

“I know what I want and expect this hint to do; are there any similar actions that it might also be allowed to trigger, and how could I check if I need to allow for them or block them?”

Footnote: This journey of rediscovery was prompted by an email from Kaley Crum who supplied me with an example of Oracle using an index skip scan when it had been hinted to do an index range scan.

RMAN's CATALOG command

Hemant K Chitale - Mon, 2021-01-25 09:14

The CATALOG START WITH command allows you to update the RMAN Repository with information about backup pieces (or archivelogs) in the specified location.  

For example, if older backups have already been purged from RMAN but are now restored from tape, they can be made visible to RMAN with the CATALOG START WITH command.

Another case would be if you relocate backups to an alternate filesystem or diskgroup and the RMAN repository needs to be updated to identify the new location.

If you copy a backup to another server and then restore the controlfile from a different backup, you can have the controlfile updated with information about the copied backups using this command.

You can also take a backup from a Primary database and catalog it to a Standby (e.g. when you want to update a Standby which is significantly lagging). Oracle also allows you to catalog a backup from a Standby into the Primary server if those backups are not available on the Primary.
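
(For reference, CATALOG has several other forms besides CATALOG START WITH; the paths below are hypothetical placeholders, shown only to illustrate the command syntax, and are not used in the demonstrations that follow.)

# hypothetical paths, for illustration only
catalog backuppiece '/var/tmp/For_Restore/some_piece.bkp';
catalog archivelog '/var/tmp/ArchLogs_Restore/some_archivelog.dbf';
catalog datafilecopy '/var/tmp/copies/users01_copy.dbf';
catalog recovery area noprompt;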


 A few demonstrations :



Demonstration 1 : Relocated Backup Set / BackupPiece for Datafile Backup(s)

SQL> select file#, name, checkpoint_change#
2 from v$datafile
3 where name = '/opt/oracle/oradata/ORCLCDB/users01.dbf'
4 /

FILE# NAME CHECKPOINT_CHANGE#
---------- ------------------------------------------------ ------------------
7 /opt/oracle/oradata/ORCLCDB/users01.dbf 7583758

SQL>
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Mon Jan 25 22:18:20 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Mon Jan 25 22:18:26 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> list backup of datafile 7;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
42 Full 229.31M DISK 00:00:26 14-NOV-20
BP Key: 42 Status: AVAILABLE Compressed: YES Tag: TAG20201114T162700
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp
List of Datafiles in backup set 42
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7343626 14-NOV-20 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
53 Full 229.31M DISK 00:00:26 25-JAN-21
BP Key: 53 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221421
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
List of Datafiles in backup set 53
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

RMAN>
-- Datafile 7 is currently at a higher SCN (7583758) than the latest backup as of 25-Jan-21





RMAN> crosscheck backup of datafile 7;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=288 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=46 device type=DISK
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=53 STAMP=1062800062
Crosschecked 1 objects

crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp RECID=42 STAMP=1056472020
Crosschecked 1 objects


RMAN>
----- both backups are no longer available on disk





oracle19c>pwd
/var/tmp/For_Restore
oracle19c>ls -l
total 318016
-rw-r-----. 1 oracle oinstall 9194496 Jan 25 22:14 o1_mf_annnn_TAG20210125T221418_j0xnkv4w_.bkp
-rw-r-----. 1 oracle oinstall 4457984 Jan 25 22:14 o1_mf_annnn_TAG20210125T221418_j0xnkvdk_.bkp
-rw-r-----. 1 oracle oinstall 2251776 Jan 25 22:14 o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
-rw-r-----. 1 oracle oinstall 62976 Jan 25 22:15 o1_mf_annnn_TAG20210125T221517_j0xnmoj0_.bkp
-rw-r-----. 1 oracle oinstall 240459776 Jan 25 22:14 o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
-rw-r-----. 1 oracle oinstall 69206016 Jan 25 22:14 o1_mf_nnndf_TAG20210125T221421_j0xnkym5_.bkp
oracle19c>
----- backups of 25-Jan have been restored from Tape to /var/tmp/For_Restore





RMAN> catalog start with '/var/tmp/For_Restore';

searching for all files that match the pattern /var/tmp/For_Restore

List of Files Unknown to the Database
=====================================
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkv4w_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkvdk_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221517_j0xnmoj0_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnkym5_.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkv4w_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkvdk_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221517_j0xnmoj0_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnkym5_.bkp

RMAN>
RMAN> list backup of datafile 7;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
42 Full 229.31M DISK 00:00:26 14-NOV-20
BP Key: 42 Status: EXPIRED Compressed: YES Tag: TAG20201114T162700
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp
List of Datafiles in backup set 42
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7343626 14-NOV-20 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

BS Key Type LV Size
------- ---- -- ----------
53 Full 229.31M
List of Datafiles in backup set 53
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

Backup Set Copy #2 of backup set 53
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:00:26 25-JAN-21 YES TAG20210125T221421

List of Backup Pieces for backup set 53 Copy #2
BP Key Pc# Status Piece Name
------- --- ----------- ----------
64 1 AVAILABLE /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp

Backup Set Copy #1 of backup set 53
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:00:26 25-JAN-21 YES TAG20210125T221421

List of Backup Pieces for backup set 53 Copy #1
BP Key Pc# Status Piece Name
------- --- ----------- ----------
53 1 EXPIRED /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp

RMAN>
----- Now RMAN finds that there is one more backup in /var/tmp/For_Restore
----- RMAN also identifies that Backup Set 53 actually has 2 copies -- Copy#2 being in /var/tmp/For_Restore
----- The BackupSet is 53 but the BackupPiece is 53 at the FRA location and 64 for the Copy at /var/tmp/For_Restore
----- So, the CATALOG command has added this copy as a new BackupPiece in the Repository





RMAN> crosscheck backup of datafile 7;

using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp RECID=42 STAMP=1056472020
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=64 STAMP=1062800572
Crosschecked 1 objects

crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=53 STAMP=1062800062
Crosschecked 2 objects


RMAN> delete expired backup of datafile 7;

using channel ORA_DISK_1
using channel ORA_DISK_2

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
42 42 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp
53 53 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=53 STAMP=1062800062
Deleted 1 EXPIRED objects

deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp RECID=42 STAMP=1056472020
Deleted 1 EXPIRED objects


RMAN> list backup of datafile 7;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
53 Full 229.31M DISK 00:00:26 25-JAN-21
BP Key: 64 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221421
Piece Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
List of Datafiles in backup set 53
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

RMAN>
----- after running CROSSCHECK and DELETE EXPIRED, RMAN now identifies that Backupset 53 has only one BackupPiece at /var/tmp/For_Restore
----- Any attempt to RESTORE DATAFILE 7 would now use this BackupPiece




Demonstration 2 : Relocated ArchiveLog and Backup of ArchiveLog

RMAN> list archivelog from  sequence 119 until sequence 119;

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
286 1 119 A 25-JAN-21
Name: /opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf


RMAN> list backup of archivelog from sequence 119 until sequence 119;


List of Backup Sets
===================


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
51 2.15M DISK 00:00:01 25-JAN-21
BP Key: 51 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221418
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp

List of Archived Logs in backup set 51
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 119 7582383 25-JAN-21 7583492 25-JAN-21

RMAN>
RMAN> crosscheck archivelog from sequence 119 until sequence 119;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=288 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=46 device type=DISK
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf RECID=286 STAMP=1062800057
Crosschecked 1 objects


RMAN> crosscheck backup of archivelog from sequence 119 until sequence 119;

using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=51 STAMP=1062800060
Crosschecked 1 objects


RMAN>
----- The CROSSCHECK command finds that both the ArchiveLog and its backup are missing





RMAN> catalog start with '/var/tmp/ArchLogs_Restore/';

searching for all files that match the pattern /var/tmp/ArchLogs_Restore/

List of Files Unknown to the Database
=====================================
File Name: /var/tmp/ArchLogs_Restore/1_119_1036108814.dbf
File Name: /var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /var/tmp/ArchLogs_Restore/1_119_1036108814.dbf
File Name: /var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp

RMAN> crosscheck archivelog from sequence 119 until sequence 119;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=288 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=46 device type=DISK
validation succeeded for archived log
archived log file name=/var/tmp/ArchLogs_Restore/1_119_1036108814.dbf RECID=299 STAMP=1062801628
Crosschecked 1 objects

validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf RECID=286 STAMP=1062800057
Crosschecked 1 objects


RMAN> crosscheck backup of archivelog from sequence 119 until sequence 119;

using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=51 STAMP=1062800060
Crosschecked 1 objects

crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=66 STAMP=1062801628
Crosschecked 1 objects


RMAN>
RMAN> delete expired backup of archivelog from sequence 119 until sequence 119;

using channel ORA_DISK_1
using channel ORA_DISK_2

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
51 51 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=51 STAMP=1062800060
Deleted 1 EXPIRED objects


RMAN> crosscheck backup of archivelog from sequence 119 until sequence 119;

using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=66 STAMP=1062801628
Crosschecked 1 objects


RMAN>
----- After I CROSSCHECK in the new (restored) location, RMAN finds the ArchiveLog and its backup
----- I can DELETE the EXPIRED backup
----- (note that the missing ArchiveLog /opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf is no longer listed as the CROSSCHECK had already marked it as "validation failed")



Demonstration 3 : Datafile Backup from Standby available at Primary

----- Backup of Datafile 7 taken at the Standby
RMAN> backup as compressed backupset datafile 7 format '/var/tmp/For_Primary/datafile_7.bkp';

Starting backup at 25-JAN-21
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=264 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/opt/oracle/oradata/STDBYDB/users01.dbf
channel ORA_DISK_1: starting piece 1 at 25-JAN-21
channel ORA_DISK_1: finished piece 1 at 25-JAN-21
piece handle=/var/tmp/For_Primary/datafile_7.bkp tag=TAG20210125T225828 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 25-JAN-21

Starting Control File and SPFILE Autobackup at 25-JAN-21
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/autobackup/2021_01_25/o1_mf_s_1062802630_j0xq4pmm_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 25-JAN-21

RMAN>
----- The backup is then copied over to the Primary Server





RMAN> catalog start with '/var/tmp/From_Standby/';

searching for all files that match the pattern /var/tmp/From_Standby/

List of Files Unknown to the Database
=====================================
File Name: /var/tmp/From_Standby/datafile_7.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /var/tmp/From_Standby/datafile_7.bkp

RMAN> list backup of datafile 7;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
60 Full 229.31M DISK 00:00:26 25-JAN-21
BP Key: 70 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221421
Piece Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
List of Datafiles in backup set 60
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
62 Full 1.18M DISK 00:00:00 25-JAN-21
BP Key: 73 Status: AVAILABLE Compressed: YES Tag: TAG20210125T225828
Piece Name: /var/tmp/From_Standby/datafile_7.bkp
List of Datafiles in backup set 62
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7591636 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf

RMAN>
----- The Primary now recognises that there are 2 distinct backups of datafile 7
----- The one in /var/tmp/For_Restore is as of CheckPoint SCN 7583529 (it has a new BS Key and BackupPiece as I have deleted and re-cataloged it for this, third, demo)
----- The one from the Standby at /var/tmp/From_Standby is at CheckPoint SCN 7591636 -- which is a higher SCN as it is a more recent backup
----- I can actually use the backup from the Standby and Restore it to the Primary





RMAN> sql 'alter database datafile 7 offline';

sql statement: alter database datafile 7 offline

RMAN> restore datafile 7;

Starting restore at 25-JAN-21
using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /opt/oracle/oradata/ORCLCDB/users01.dbf
channel ORA_DISK_1: reading from backup piece /var/tmp/From_Standby/datafile_7.bkp
channel ORA_DISK_1: piece handle=/var/tmp/From_Standby/datafile_7.bkp tag=TAG20210125T225828
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 25-JAN-21

RMAN> sql 'alter database datafile 7 online';

sql statement: alter database datafile 7 online
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 01/25/2021 23:02:55
RMAN-11003: failure during parse/execution of SQL statement: alter database datafile 7 online
ORA-01113: file 7 needs media recovery
ORA-01110: data file 7: '/opt/oracle/oradata/ORCLCDB/users01.dbf'

RMAN> recover datafile 7;

Starting recover at 25-JAN-21
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 25-JAN-21

RMAN> sql 'alter database datafile 7 online';

sql statement: alter database datafile 7 online

RMAN>
----- So, when datafile 7 is corrupt at the Primary, I take it OFFLINE and then issue a RESTORE command
----- RMAN automatically identifies that, of the two backups, the one at /var/tmp/From_Standby/datafile_7.bkp is more recent
----- So, the Backup from the Standby can be Restored to the Primary and the datafile brought ONLINE
----- RECOVERY is still required because the Primary database is currently at a higher SCN than the backup of that datafile from the Standby
----- So, the RECOVER command applies all Redo for SCNs higher than 7591636 that needs to be applied to Datafile 7
----- For the duration that datafile 7 was OFFLINE, I had stopped Database Recovery at the Standby
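
To tie the steps of this demonstration together, here is a minimal sketch of the same flow as one scripted RMAN session (the paths and the datafile number are the ones from this demo; this is an illustration added for this write-up, not a verbatim part of the original transcript):

rman target / <<'EOF'
# make the backup piece copied from the Standby known to the Primary controlfile
catalog start with '/var/tmp/From_Standby/' noprompt;
run {
  sql 'alter database datafile 7 offline';
  # RMAN chooses the most recent usable backup of the file, here the one from the Standby
  restore datafile 7;
  # apply the Primary redo generated since the backup checkpoint SCN
  recover datafile 7;
  sql 'alter database datafile 7 online';
}
EOF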


Thus, there are different uses for the CATALOG START WITH command in RMAN
(what I haven't demonstrated here is restoring a Full Database -- either on the same server or to another server -- when the BackupPieces are at an alternate location).


Categories: DBA Blogs

Amazon Comprehend | Natural Language Processing (NLP) On AWS

Online Apps DBA - Mon, 2021-01-25 02:12

AWS uses Amazon Comprehend for natural language processing (NLP) tasks. It uses ML to find insights and relationships in a text. To work on Amazon Comprehend, no machine learning experience required. Natural Language Processing (NLP) is an approach for computers to understand, analyze, and extract meaning from textual data in a smart and useful way. […]

The post Amazon Comprehend | Natural Language Processing (NLP) On AWS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introduction To Deep Learning On AWS

Online Apps DBA - Mon, 2021-01-25 02:04

Nowadays Machine Learning and Artificial Intelligence gaining a lot of buzzes. But have you noted about AWS deep learning? Deep learning is also a developing field that is turning many heads in the current business scene. AWS has carried another point to deep learning with Amazon Machine Images (AMIs) explicitly implied for AI. Deep learning […]

The post Introduction To Deep Learning On AWS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Microk8s: publishing the dashboard (reachable from remote/internet)

Dietrich Schroff - Sat, 2021-01-23 15:22

 

If you enable the dashboard on a microk8s cluster (or single node) you can follow this tutorial: https://microk8s.io/docs/addon-dashboard

The problem is, the command

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

has to be re-executed every time you restart the node that you use to access the dashboard.

A better configuration can be done this way: Run the following command and change 

type: ClusterIP -->   type: NodePort

kubectl -n kube-system edit service kubernetes-dashboard

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-01-22T21:19:24Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3599"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
  clusterIP: 10.152.183.81
  clusterIPs:
  - 10.152.183.81
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Then run

root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.152.183.81   <none>        443:30713/TCP   4m14s

After that you can access the dashboard via the port shown after the "443:" mapping - in my case https://zigbee:30713
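
If you prefer not to edit the Service interactively, the same change can be scripted. A minimal sketch (standard kubectl patch syntax; only the idea of scripting it is an addition of mine):

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get service kubernetes-dashboard

The second command simply shows the NodePort that was assigned, exactly as in the output above.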

 

 

Eleven Table Tennis: Basics

The Oracle Instructor - Sat, 2021-01-23 08:05

Assuming you are an IRL player who wants to get as close to the real thing as possible, that’s what I’d recommend:

Make sure you have enough space to play

The green box is your playing space. It should be a square of 2.50 m X 2.50 m ideally. Make sure to leave some space at the front, so you can reach balls close to the net and even a little across the net. Otherwise you may become a victim of ghost serves. Leave enough room at the sides – some opponents play angled, just like IRL.

If you don’t have enough space for this setup – maybe you shouldn’t play multiplayer mode then. You can still have fun, playing against the ballmachine or against the AI. Actually, I think it’s worth the money even in that case.

Use the discord channel

The Eleven TT community is on this discord channel: https://discord.gg/s8EbXWG

I recommend you register there and use the same or a similar name as the name you have in the game. For example, I'm Uwe on discord and uwe. in the game (because the name uwe was already taken). This is handy for getting advice from more experienced players; the game developers are there as well. They are very responsive and keen to improve Eleven TT even further, based on your feedback.

There’s a preview version presently, that has improved tracking functionality. You can just ask the developers there to get you this preview version. I did, and I find it better than the regular version, especially for fast forehand strokes.

Setup your paddle

When you have the Sanlaki paddle adapter (as recommended in the previous post), go to the menu and then to Paddle Settings:

Click on Paddle Position and select the Sanlaki Adapter:

As an IRL player, you may start with an Advanced Paddle Surface:

See how that works for you. Bounciness translates to the speed of your blade. An OFF ++ blade would be maximum bounciness. Spin is self-explanatory. You have no tackiness attribute, though. Throw Coefficient translates to the sponge thickness. The higher that value, the thicker the sponge.

Serving

This takes some time to get used to. You need to press the trigger on the left controller to first "produce" a ball, then you throw it up and press the trigger again to release the ball. It took me a while to practice that, and I still sometimes fail to release the ball as smoothly as I would like.

What I like very much: You have a built-in arbiter, who makes sure your serve is legal according to the ITTF rules. That is applied for matches in multiplayer mode as well as for matches in single player mode. But not in free hit mode! Check out the Serve Practice:

If a serve was illegal, it tells you what went wrong:

Remove AI Spin Lock

I recommend you practice with the AI opponent in single player mode for a while. It has spin lock on by default, which means it will never produce any side spin. I find that unrealistic. After some practice against the AI in single player mode, you're ready for matches in multiplayer mode against other human opponents.

Categories: DBA Blogs

Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster

Dietrich Schroff - Fri, 2021-01-22 15:12

Kubernetes cluster with microk8s on Raspberry Pi

If you want to join a node and you get the following error:

microk8s join 192.168.178.57:25000/6a3ce1d2f0105245209e7e5e412a7e54

Contacting cluster at 192.168.178.57
Traceback (most recent call last):
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
    join_dqlite(connection_parts)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
    update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
    with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'

This error happens if you have not enabled the dns addon on your nodes.

So just run "microk8s.enable dns" on every machine:

microk8s.enable dns

Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.178.57
Adding argument --cluster-dns to nodes.
Configuring node 192.168.178.57
Restarting nodes.
Configuring node 192.168.178.57
DNS is enabled

And after that the join will work like expected:

root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/ed3f57a3641581964cad43f0ceb2b526
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..  
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    <none>   3m35s   v1.20.1-34+97978f80232b01
zigbee   Ready    <none>   37m     v1.20.1-34+97978f80232b01
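
A small pre-flight check along these lines (my own addition, not from the original post) avoids the failed join in the first place - verify the dns addon before calling microk8s join:

# run on every node before joining
microk8s status --wait-ready | grep dns
# if dns is listed under "disabled", enable it first
microk8s.enable dns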
 

Google Cloud Services and Tools

Online Apps DBA - Fri, 2021-01-22 04:59

Google Cloud Services is a set of Computing, Networking, Storage, Big Data, Machine Learning, and Management services offered by Google which runs on the same cloud infrastructure that Google uses internally for YouTube, Gmail, and other end-user products. Want to know more about the tools and services offered by Google Cloud? Read the blog post […]

The post Google Cloud Services and Tools appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introduction To Amazon Lex | Conversational AI for Chatbots

Online Apps DBA - Fri, 2021-01-22 04:08

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging […]

The post Introduction To Amazon Lex | Conversational AI for Chatbots appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introduction To Amazon SageMaker Built-in Algorithms

Online Apps DBA - Fri, 2021-01-22 04:01

Amazon SageMaker provides a suite of built-in algorithms to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. Want to know more about the Amazon SageMaker Built-in Algorithms? Read the blog post at https://k21academy.com/awsml12 to learn more. The blog post covers: • What Is Amazon SageMaker and […]

The post Introduction To Amazon SageMaker Built-in Algorithms appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partner Webcast – Hitchhikers Guide to Oracle Cloud (Part 2)

Oracle Cloud Infrastructure is a set of complementary cloud services that enable you to build and run a wide range of applications and services in a highly available hosted environment. Oracle Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcing SLOB 2.5.3

Kevin Closson - Thu, 2021-01-21 15:08

This is just a quick blog post to inform readers that SLOB 2.5.3 is now available at the following webpage: click here.

SLOB 2.5.3 is a bug fix release. One of the fixed bugs has to do with how SLOB sessions get connected to RAC instances. SLOB users can surely connect to the SCAN service, but for more repeatable testing I advise SLOB 2.5.3 and SQL*Net services configured one per RAC node. This manner of connectivity establishes affinity between schemas and RAC nodes. For example, repeatability is improved if the sessions performing SLOB Operations against, say, user7's schema connect to the same RAC node each time as you iterate through your testing.

The following is cut and pasted from SLOB/misc/sql_net/README:

The tnsnames.ora in this directory offers an example of
service names that will allow the user to test RAC with
repeatable results. Connecting SLOB sessions to the round
robin SCAN listener will result in SLOB sessions connecting
to random RAC nodes. This is acceptable but not optimal and
can result in varying run results due to slight variations
in sessions per RAC node from one test to another.
As of SLOB 2.5.3, runit.sh uses the SQLNET_SERVICE_BASE and
SQLNET_SERVICE_MAX slob.conf parameters to sequentially
affinity SLOB threads (Oracle sessions) to numbered service
names. For example:
SQLNET_SERVICE_BASE=rac
SQLNET_SERVICE_MAX=8
With these assigned values, runit.sh will connect the first
SLOB thread to rac1 then rac2 and so forth until rac8 after
which the connection rotor loops back to rac1. This manner
of RAC affinity testing requires either a single SLOB
schema (see SLOB Single Schema Model in the documentation)
or 8 SLOB schemas to align properly with the value assigned
to SQLNET_SERVICE_MAX. The following command will connect
32 SLOB threads (Oracle sessions) to each RAC node in an
8-node RAC configuration given the tnsnames.ora example
file in this directory:
$ sh ./runit.sh -s 8 -t 32
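
For readers who have not opened SLOB/misc/sql_net/tnsnames.ora yet, the per-node service entries it describes look roughly like this (host names and ports below are placeholders of mine; check the shipped file for the real example):

rac1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac1))
  )

rac2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac2))
  )

# ...continuing up to rac8, to match SQLNET_SERVICE_BASE=rac and SQLNET_SERVICE_MAX=8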

Find sku_no values from the table which does not have any records for ven_type='P'

Tom Kyte - Thu, 2021-01-21 09:26
Hi Tom and Team, I have a very simple table with the following records. site_no, sku_no, vendor_id, ven_type A sku_no can have multiple vendor_id values associated to it. A vendor_id can have a ven_type = P (Primary) or S (Secondary) I want to find all those sku_no values from the table that do not have any ven_type='P' records. Kindly help me write a query for this. Please let me know if the explanation is not clear or if you want me to provide any other details. Thank you.
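
One way to write such a query, as a sketch only (the table name is not given in the question, so "sku_vendors" is a placeholder):

select sku_no
from   sku_vendors
group  by sku_no
having max(case when ven_type = 'P' then 1 else 0 end) = 0;

This keeps every sku_no whose rows contain no ven_type = 'P' entry at all; a NOT EXISTS correlated subquery against the same table is an equivalent alternative.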
Categories: DBA Blogs

Troubleshooting heavy hash joins

Tom Kyte - Thu, 2021-01-21 09:26
Hello Chris, Hello Connor. I know that hash join performance might suffer a lot if it has to store the hash table on disk, but I have no idea how to check whether that is happening and whether that is the reason a query is slow. Admittedly I do not know much about memory usage in Oracle and how to track it. Could you please point me toward some articles on the topic, or data dictionary views, that would help? Unfortunately I have neither DBA access nor access to trace files.
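
While waiting for the answer, one place to look is V$SQL_WORKAREA (assuming you have been granted SELECT on the V$ views; the sql_id below is a placeholder). ONEPASS or MULTIPASS executions mean the hash join did spill to temp:

select operation_type,
       optimal_executions,
       onepass_executions,
       multipasses_executions,
       round(last_memory_used / 1024 / 1024)  last_mem_mb,
       round(last_tempseg_size / 1024 / 1024) last_temp_mb
from   v$sql_workarea
where  sql_id = '&sql_id'
and    operation_type like 'HASH%';

With the gather_plan_statistics hint (or statistics_level = all), DBMS_XPLAN.DISPLAY_CURSOR with the 'ALLSTATS LAST' format also reports Used-Mem and Used-Tmp per plan line.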
Categories: DBA Blogs

Spooling data to .csv file via SQL Plus

Tom Kyte - Thu, 2021-01-21 09:26
It is probably apparent from my job title that my role is far from a dba. I am, however, a frequent user of Oracle SQL Developer. I work in mass appraisal and use SQL Developer to select, update, insert, delete, etc., on a regular basis. I would like to bring some automation to some of the select statements that I run most frequently and spool the results to a .csv file. An example of this is a select statement for identifying recently sold properties that need to be reviewed. The following command [using the Windows Command Prompt] has been stored as a scheduled task [using Windows Task Scheduler]: G:\>sqlplus username/password@connection @G:\SALES_VALIDATION\bat_files\weekly_salesval_rev.sql Weekly_salesval_rev.sql is the script containing the spool commands and select statement. I have included a link so you can view the script. What command [or commands] can I incorporate so that the data will be formatted appropriately [and include column headings] for review in Excel? While there should be approximately 21 columns of data, the results are currently displaying in a scattered fashion within the first 3 columns of the .csv file. If you need any other detail, please let me know. Also, if you would suggest or recommend other approaches to automating frequently run SQL select statements, let me know and I would be glad to look into those alternatives. Thank you for your time and help! Berkley Rose Product Versions: SQL Plus Release 12.2.0.1.0 Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 Oracle SQL Developer 4.0.3.16 Windows Version 10.0.17763.1577
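
Pending the full answer, SQL*Plus 12.2 already has a CSV mode that handles the headings and the comma-separation for you. A sketch of what the spooling script could look like (column list, table name and output path are placeholders, not the poster's real script):

rem weekly_salesval_rev.sql -- sketch only
set markup csv on quote on
set feedback off
set echo off
set trimspool on
spool G:\SALES_VALIDATION\output\weekly_salesval_rev.csv
select parcel_id, sale_date, sale_price   -- placeholder column list
from   sales_to_review                    -- placeholder table
where  sale_date >= trunc(sysdate) - 7;
spool off
exit

With SET MARKUP CSV ON the first spooled line is the column headings and every row comes out comma-separated and quoted, so Excel opens it in separate columns instead of scattering the data.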
Categories: DBA Blogs
