I frequently use apex_collections in forms and reports to enjoy their performance and to support special cases. Apex_Collections can be created and accessed with SQL SELECT statements. This is very nice, but until now I had to use API calls to perform inserts, updates, and deletes. When I used an Interactive Grid, I had to provide PL/SQL code to support SAVE operations in the grid.
My first attempt, wrapping apex_collections with a view that supports an INSTEAD OF UPDATE trigger, failed with a runtime error message from Oracle: "ORA-20987: APEX - ORA-01031: insufficient privileges".
Recently I listened to Steven Feuerstein's lesson about pipelined table functions:
Getting Started with PL/SQL Table Functions: Pipelined Table Functions
There I saw an opportunity to encapsulate the fact that I want to access these Apex_Collections for update. I wrote the package Pipe_Apex_Collections with the function pipe_rows. Finally, using that function as the source in the view definition, I created a view that supports DML operations.
Here is the source code for the view definition: v_apex_collections.sql
It is part of this plugin and demo app: strack-software-upload-to-collection-plugin
Here is a live demonstration of the app: https://apex.oracle.com/pls/apex/f?p=103003:LOGIN_DESKTOP
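The pattern can be sketched as follows. This is a minimal, hypothetical outline of the approach, not the actual Pipe_Apex_Collections source; the record type and the column list (reduced to C001 for brevity) are my own simplifications for illustration:

```sql
-- Sketch: pipe APEX collection rows through a table function,
-- then expose them in a view with an INSTEAD OF trigger for DML.
create or replace package pipe_apex_collections is
    type t_row is record (
        collection_name apex_collections.collection_name%type,
        seq_id          apex_collections.seq_id%type,
        c001            apex_collections.c001%type
    );
    type t_tab is table of t_row;
    function pipe_rows return t_tab pipelined;
end pipe_apex_collections;
/
create or replace package body pipe_apex_collections is
    function pipe_rows return t_tab pipelined is
        l_row t_row;
    begin
        for rec in (select collection_name, seq_id, c001
                    from apex_collections) loop
            l_row.collection_name := rec.collection_name;
            l_row.seq_id          := rec.seq_id;
            l_row.c001            := rec.c001;
            pipe row (l_row);
        end loop;
        return;
    end pipe_rows;
end pipe_apex_collections;
/
-- The view selects from the function instead of from apex_collections directly.
create or replace view v_apex_collections as
select * from table(pipe_apex_collections.pipe_rows);

-- DML on the view is translated into APEX collection API calls.
create or replace trigger v_apex_collections_dml
instead of insert or update or delete on v_apex_collections
for each row
begin
    if inserting then
        apex_collection.add_member(p_collection_name => :new.collection_name,
                                   p_c001            => :new.c001);
    elsif updating then
        apex_collection.update_member(p_collection_name => :old.collection_name,
                                      p_seq             => :old.seq_id,
                                      p_c001            => :new.c001);
    else
        apex_collection.delete_member(p_collection_name => :old.collection_name,
                                      p_seq             => :old.seq_id);
    end if;
end;
/
```

With this in place, an Interactive Grid can use v_apex_collections as its source and perform SAVE operations without extra page-level PL/SQL.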
Regards,
Dirk
I'm a freelancer and have loved building Oracle APEX applications since 2010. The opinions and interests expressed on this blog are my own. There is no explicit or implicit endorsement of any company.
Sunday, 24 February 2019
Saturday, 24 March 2018
Independent Application Schema for APEX Applications
If you have to assign an independent schema to an APEX application, it is advantageous if the application itself does not need to be changed. If no fixed schema names were used in the application, a generated schema can easily be assigned to it.
The following query lists statements for the objects of a schema that can be created as synonyms or views in a new schema.
Tables and views are mapped to views with identical names, which allows APEX programs to check the column names and types. The list should still be filtered down to the objects actually required, to keep the privileges minimal.
This query on the catalog views generates the statements required to publish a database schema in a second schema. It is the result of my own research:
--------------------------------------------------------------------------------
WITH PA AS ( -- Parameters
SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || '_APP' GRANTEE,
SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') OWNER
FROM DUAL
),
PKEY_Q AS (
-- Add primary key constraints to views. This enables the APEX Builder to detect the primary key for forms and reports
SELECT
C.TABLE_NAME, C.OWNER TABLE_OWNER,
', CONSTRAINT ' || Dbms_Assert.Enquote_Name(C.CONSTRAINT_NAME) || ' PRIMARY KEY ('
|| LISTAGG(Dbms_Assert.Enquote_Name(C.COLUMN_NAME), ',') WITHIN GROUP (ORDER BY C.POSITION)
|| ') RELY DISABLE' PKEY_CONS
FROM SYS.USER_CONSTRAINTS B
JOIN SYS.USER_CONS_COLUMNS C ON C.TABLE_NAME = B.TABLE_NAME AND C.CONSTRAINT_NAME = B.CONSTRAINT_NAME AND C.OWNER = B.OWNER
WHERE B.CONSTRAINT_TYPE = 'P'
AND B.OWNER = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')
AND B.TABLE_NAME NOT LIKE 'DR$%$_' -- skip fulltext index
AND B.TABLE_NAME NOT LIKE 'BIN$%' -- this table is in the recyclebin
GROUP BY C.TABLE_NAME, C.OWNER, C.CONSTRAINT_NAME
),
COLS_Q AS (
-- List of Column alias names for views
SELECT TABLE_NAME, LISTAGG(Dbms_Assert.Enquote_Name(COLUMN_NAME), ', ') WITHIN GROUP (ORDER BY COLUMN_ID) TAB_COLUMNS
FROM SYS.USER_TAB_COLUMNS
GROUP BY TABLE_NAME
),
STATS_Q AS (
-- views
-------------------------
SELECT 'GRANT ' || PRIVS || ' ON ' || S.OWNER || '.' || S.VIEW_NAME || ' TO ' || PA.GRANTEE GRANT_STAT,
'CREATE OR REPLACE VIEW ' || PA.GRANTEE || '.' || S.VIEW_NAME
|| ' (' || S.TAB_COLUMNS || S.PKEY_CONS || ')'
|| ' AS SELECT * FROM ' || S.OWNER || '.' || S.VIEW_NAME CREATE_STAT,
'REVOKE ' || PRIVS || ' ON ' || S.OWNER || '.' || S.VIEW_NAME || ' FROM ' || PA.GRANTEE REVOKE_STAT,
'DROP VIEW ' || PA.GRANTEE || '.' || S.VIEW_NAME DROP_STAT,
S.OWNER, S.VIEW_NAME OBJECT_NAME,
'VIEW' OBJECT_TYPE,
'VIEW' DEST_OBJECT_TYPE,
ADMIN_GRANT_STAT
FROM (
SELECT CASE WHEN DG.FOREIGN_DEPS_CNT > 0
THEN NVL(DG.PRIVILEGE, 'READ')
ELSE
'SELECT' ||
CASE WHEN TR.PRIVS IS NOT NULL
THEN ', ' || TR.PRIVS
ELSE T.UPDATABLE || T.INSERTABLE || T.DELETABLE
END
END PRIVS,
NG.ADMIN_GRANT_STAT,
T.VIEW_NAME, T.OWNER,
P.PKEY_CONS, C.TAB_COLUMNS
FROM (
SELECT V.VIEW_NAME, T.OWNER,
MAX(CASE WHEN UPDATABLE = 'YES' THEN ', UPDATE' END) UPDATABLE,
MAX(CASE WHEN INSERTABLE = 'YES' THEN ', INSERT' END) INSERTABLE,
MAX(CASE WHEN DELETABLE = 'YES' THEN ', DELETE' END) DELETABLE
FROM SYS.USER_UPDATABLE_COLUMNS T
JOIN SYS.USER_VIEWS V ON V.VIEW_NAME = T.TABLE_NAME
GROUP BY V.VIEW_NAME, T.OWNER
) T LEFT OUTER JOIN ( -- grantable dependent object privileges
-- when a view accesses a view in a foreign schema, this view gets READ-only access
SELECT NAME, COUNT(*) FOREIGN_DEPS_CNT,
LISTAGG(PRIVILEGE, ', ') WITHIN GROUP (ORDER BY PRIVILEGE) PRIVILEGE
FROM (
SELECT DISTINCT D.NAME, PRI.PRIVILEGE
FROM SYS.USER_DEPENDENCIES D
JOIN SYS.ALL_TAB_PRIVS PRI
ON PRI.table_Schema = D.REFERENCED_OWNER
AND PRI.table_Name = D.REFERENCED_NAME
AND PRI.grantee IN ('PUBLIC', SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'))
WHERE D.TYPE = 'VIEW'
AND D.REFERENCED_TYPE IN ('VIEW', 'TABLE', 'SYNONYM')
AND D.REFERENCED_OWNER NOT IN ('PUBLIC', SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')) -- dependent on object in foreign schema
AND PRI.GRANTABLE = 'YES'
)
GROUP BY NAME
) DG ON T.VIEW_NAME = DG.NAME
LEFT OUTER JOIN ( -- not grantable dependent object privileges
SELECT NAME, COUNT(*) FOREIGN_DEPS_CNT,
LISTAGG(CASE WHEN NOT_GRANTABLE > 0
THEN 'GRANT ' || PRIVILEGE || ' ON ' || REFERENCED_OWNER || '.' || REFERENCED_NAME
|| ' TO ' || SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || ' WITH GRANT OPTION;'
END, chr(10)) WITHIN GROUP (ORDER BY PRIVILEGE) ADMIN_GRANT_STAT
FROM (
SELECT D.NAME, D.REFERENCED_OWNER, D.REFERENCED_NAME,
LISTAGG(PRI.PRIVILEGE, ', ') WITHIN GROUP (ORDER BY PRIVILEGE) PRIVILEGE,
COUNT(*) NOT_GRANTABLE
FROM SYS.USER_DEPENDENCIES D
JOIN SYS.ALL_TAB_PRIVS PRI
ON PRI.table_Schema = D.REFERENCED_OWNER
AND PRI.table_Name = D.REFERENCED_NAME
AND PRI.grantee IN ('PUBLIC', SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'))
WHERE D.TYPE = 'VIEW'
AND D.REFERENCED_TYPE IN ('VIEW', 'TABLE', 'SYNONYM')
AND D.REFERENCED_OWNER NOT IN ('PUBLIC', SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')) -- dependent on object in foreign schema
AND PRI.GRANTABLE = 'NO'
GROUP BY D.NAME, D.REFERENCED_OWNER, D.REFERENCED_NAME
)
GROUP BY NAME
) NG ON T.VIEW_NAME = NG.NAME
LEFT OUTER JOIN ( -- updatable views with INSTEAD OF trigger
SELECT REGEXP_REPLACE(
LISTAGG(TRIGGERING_EVENT, ', ') WITHIN GROUP (ORDER BY TRIGGERING_EVENT),
'\sOR\s', ', ') PRIVS,
T.TABLE_NAME VIEW_NAME
FROM SYS.USER_TRIGGERS T
WHERE TRIGGER_TYPE = 'INSTEAD OF'
AND BASE_OBJECT_TYPE = 'VIEW'
GROUP BY TABLE_NAME
) TR ON T.VIEW_NAME = TR.VIEW_NAME
LEFT OUTER JOIN PKEY_Q P ON T.VIEW_NAME = P.TABLE_NAME
LEFT OUTER JOIN COLS_Q C ON T.VIEW_NAME = C.TABLE_NAME
) S, PA
WHERE PA.GRANTEE IS NOT NULL
UNION ALL
-- normal tables
----------------
SELECT
'GRANT ' || PRIVS || ' ON '
|| PA.OWNER || '.' || TABLE_NAME || ' TO ' || PA.GRANTEE GRANT_STAT,
'CREATE OR REPLACE VIEW ' || PA.GRANTEE || '.' || T.TABLE_NAME
|| ' (' || T.TAB_COLUMNS || T.PKEY_CONS || ')'
|| ' AS SELECT * FROM ' || PA.OWNER || '.' || T.TABLE_NAME CREATE_STAT,
'REVOKE ' || PRIVS || ' ON '
|| PA.OWNER || '.' || TABLE_NAME || ' FROM ' || PA.GRANTEE REVOKE_STAT,
'DROP VIEW ' || PA.GRANTEE || '.' || T.TABLE_NAME DROP_STAT,
PA.OWNER, T.TABLE_NAME OBJECT_NAME,
'TABLE' OBJECT_TYPE,
'VIEW' DEST_OBJECT_TYPE,
'' ADMIN_GRANT_STAT
FROM (
SELECT T.TABLE_NAME,
CASE WHEN READ_ONLY = 'NO'
AND NOT EXISTS ( -- when this table is part of a materialized view, only SELECT is allowed
SELECT 1
FROM USER_OBJECTS MV
WHERE MV.OBJECT_NAME = T.TABLE_NAME
AND MV.OBJECT_TYPE = 'MATERIALIZED VIEW'
) THEN 'SELECT, UPDATE, INSERT, DELETE '
ELSE 'SELECT '
END PRIVS,
P.PKEY_CONS, C.TAB_COLUMNS
FROM SYS.USER_TABLES T
LEFT OUTER JOIN PKEY_Q P ON T.TABLE_NAME = P.TABLE_NAME
LEFT OUTER JOIN COLS_Q C ON T.TABLE_NAME = C.TABLE_NAME
WHERE T.IOT_NAME IS NULL -- skip overflow tables of index organized tables
AND T.TABLE_NAME NOT LIKE 'DR$%$_' -- skip fulltext index
) T, PA
WHERE PA.GRANTEE IS NOT NULL
AND NOT EXISTS ( -- skip tables that are part of a materialized view log
SELECT --+ NO_UNNEST
1
FROM SYS.ALL_MVIEW_LOGS MV
WHERE MV.LOG_TABLE = T.TABLE_NAME
AND MV.LOG_OWNER = PA.OWNER
)
UNION ALL
-- Function, Procedure, Package, Type
-------------------------------
SELECT 'GRANT EXECUTE ON ' || PA.OWNER || '.' || T.OBJECT_NAME || ' TO ' || PA.GRANTEE GRANT_STAT,
'CREATE OR REPLACE SYNONYM ' || PA.GRANTEE || '.' || OBJECT_NAME
|| ' FOR ' || PA.OWNER || '.' || OBJECT_NAME CREATE_STAT,
'REVOKE EXECUTE ON ' || PA.OWNER || '.' || T.OBJECT_NAME || ' FROM ' || PA.GRANTEE REVOKE_STAT,
'DROP SYNONYM ' || PA.GRANTEE || '.' || OBJECT_NAME DROP_STAT,
PA.OWNER, T.OBJECT_NAME,
T.OBJECT_TYPE,
'SYNONYM' DEST_OBJECT_TYPE,
'' ADMIN_GRANT_STAT
FROM SYS.USER_OBJECTS T, PA
WHERE PA.GRANTEE IS NOT NULL
AND T.OBJECT_TYPE IN ('FUNCTION','PROCEDURE','PACKAGE','TYPE')
UNION ALL -- Sequences
SELECT 'GRANT SELECT ON ' || PA.OWNER || '.' || T.SEQUENCE_NAME || ' TO ' || PA.GRANTEE GRANT_STAT,
'CREATE OR REPLACE SYNONYM ' || PA.GRANTEE || '.' || SEQUENCE_NAME
|| ' FOR ' || PA.OWNER || '.' || SEQUENCE_NAME CREATE_STAT,
'REVOKE SELECT ON ' || PA.OWNER || '.' || T.SEQUENCE_NAME || ' FROM ' || PA.GRANTEE REVOKE_STAT,
'DROP SYNONYM ' || PA.GRANTEE || '.' || SEQUENCE_NAME DROP_STAT,
PA.OWNER, T.SEQUENCE_NAME OBJECT_NAME,
'SEQUENCE' OBJECT_TYPE,
'SYNONYM' DEST_OBJECT_TYPE,
'' ADMIN_GRANT_STAT
FROM SYS.USER_SEQUENCES T, PA
WHERE PA.GRANTEE IS NOT NULL
)
SELECT S.OWNER, S.OBJECT_NAME, S.OBJECT_TYPE, S.DEST_OBJECT_TYPE,
S.ADMIN_GRANT_STAT,
case when EXISTS (
SELECT 1
FROM SYS.ALL_DEPENDENCIES D
WHERE D.OWNER = PA.GRANTEE
AND D.NAME = S.OBJECT_NAME
and D.REFERENCED_OWNER= S.OWNER
and D.REFERENCED_NAME = S.OBJECT_NAME
AND D.REFERENCED_TYPE = S.OBJECT_TYPE
) or EXISTS (
SELECT 1
FROM SYS.ALL_OBJECTS OBJ
WHERE OBJ.OWNER = PA.GRANTEE
AND OBJ.OBJECT_NAME = S.OBJECT_NAME
AND OBJ.OBJECT_TYPE = S.DEST_OBJECT_TYPE
) then 'Y' else 'N'
end DEST_OBJECT_EXISTS,
case when EXISTS (
SELECT 1
FROM SYS.ALL_SYNONYMS SYN
WHERE SYN.SYNONYM_NAME = S.OBJECT_NAME
AND SYN.OWNER = 'PUBLIC'
) then 'Y' else 'N'
end CONFLICTING_OBJECT_EXISTS,
PA.GRANTEE DEST_SCHEMA,
S.GRANT_STAT, S.CREATE_STAT, S.REVOKE_STAT, S.DROP_STAT
FROM STATS_Q S, PA;
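Under the assumption that the generated statements should be executed right away, the GRANT_STAT and CREATE_STAT columns can be fed into EXECUTE IMMEDIATE. This is a hypothetical wrapper; it assumes the query above has been saved as a view named V_PUBLISH_STATS (my own naming), and that the executing user has the required system privileges (e.g. CREATE ANY VIEW, CREATE ANY SYNONYM):

```sql
-- Hypothetical helper: run the generated GRANT and CREATE statements.
-- Assumes the query above is stored as the view V_PUBLISH_STATS.
begin
    for stat in (
        select grant_stat, create_stat
        from v_publish_stats
        where dest_object_exists = 'N'          -- only objects not yet published
          and conflicting_object_exists = 'N'   -- skip names shadowed by public synonyms
    ) loop
        execute immediate stat.grant_stat;
        execute immediate stat.create_stat;
    end loop;
end;
/
```

The REVOKE_STAT and DROP_STAT columns can be used in the same way to undo the publication.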
Rich Hickey's 'Database as a Value' concept & Oracle 12c 'Flashback Archive'
For a long time, I used triggers to record data-entry logs when creating Oracle applications.
I often had the same performance problem as Barbara in the following article:
http://www.oracle.com/technetwork/issue-archive/2016/16-mar/o26performance-2925662.html
-----------
The better way is to use a flashback archive as a table property.
The most important new features are:
* Custom logging can be omitted, because every change is available as a chronological log.
* A chronological log helps to identify unnecessary program steps (for example, delete and re-create rather than merge) and thus to optimize.
* For debugging, the state of the database from before a program crash can be retrieved.
* Synchronization jobs that regularly process the latest updates only need to remember the last call time to find the new updates on the next call.
* Accounting rules can be adhered to:
- Data entries are attributed to a responsible person and a time.
- Entries are not overwritten but are automatically superseded and entered again, with a reliability that no application-level code can match. It cannot be bypassed.
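The synchronization case can be sketched with a flashback version query. The table name EMPLOYEES and the bind variable :last_call_time (the remembered timestamp of the previous run) are my own assumptions for illustration:

```sql
-- Sketch: find all row versions of EMPLOYEES created since the last
-- synchronization run, identified by the remembered timestamp :last_call_time.
select employee_id,
       versions_starttime,
       nvl(versions_operation, 'I') operation   -- I = insert, U = update, D = delete
from   employees
       versions between timestamp :last_call_time and systimestamp
where  versions_starttime > :last_call_time
order  by versions_starttime;
```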
-----------
I'm glad that these flashback methods are finally available in Oracle 12c and that they can be used so easily. If the FLASHBACK ARCHIVE and ROW ARCHIVAL properties are set for all normal tables of a schema (for recording time) and dbms_flashback_archive.set_context_level has been called (for recording the context, including the responsible persons), then an immutable database is created, with completely new, really wonderful features. When points in time are available as an additional dimension for all objects, objects become facts and the "Epochal Time Model" can be used.
A quick introduction:
(Rich Hickey: Database as a Value)
https://youtu.be/EKdV1IgAaFc?t=12m13s
Minute 12 to 16
Famous applications that make use of this model are Apple's Time Machine for macOS and SourceTree for Git.
----
and for Oracle see:
https://youtu.be/ZU9OnoHhbNM?t=26m15s
--- Flashback Data Archive ---
* Now available in all licensed editions
* Replaces the trigger approach to populate history tables
* Designate a tablespace to hold history data along with the retention period for the data
* Specify FLASHBACK ARCHIVE for tables via ALTER TABLE
* Flashback data can be used
* to look at historical data at a point in time
* to correct data committed to a base table
* Supported with most of the common DDL commands against the source table
--- In-Database Archiving ---
* Enhancement to Flashback Data Archive
* Mark table rows as invisible (logical delete)
* ALTER TABLE ... ROW ARCHIVAL
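A minimal sketch of In-Database Archiving, assuming an EMPLOYEES table:

```sql
-- Enable In-Database Archiving: adds the hidden column ORA_ARCHIVE_STATE.
alter table employees row archival;

-- Logically delete a row instead of physically removing it.
update employees
set    ora_archive_state = '1'
where  employee_id = 100;

-- By default, archived rows are invisible to queries. To see them again:
alter session set row archival visibility = all;
```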
----
Relevant excerpts from Connor McDonald's article about Oracle Flashback's Flashback Data Archive feature:
“Flashback Data Archive provides the ability to track and store all transactional
changes to a table over its lifetime. It is no longer necessary to build
this intelligence into your application. A Flashback Data Archive is useful for
compliance with record stage policies and audit reports.”
... a better way:
First she allocates some storage to hold her archive data:
SQL> create tablespace SPACE_FOR_ARCHIVE
2 datafile 'C:\ORACLE\DB12\ARCH_SPACE.DBF'
3 size 50G;
Tablespace created.
Next, she creates a new structure, called a flashback archive, which defines the retention duration of the row change history for any table placed into it.
SQL> CREATE FLASHBACK ARCHIVE longterm
2 TABLESPACE space_for_archive
3 RETENTION 1 YEAR;
Flashback archive created
And then she simply associates her HR tables with the flashback archive.
SQL> ALTER TABLE EMPLOYEES FLASHBACK ARCHIVE LONGTERM;
Table altered.
Code Listing 8: Execution plan for tables associated with Flashback Data Archive
SQL> set autotrace traceonly explain
SQL> select * from EMPLOYEES AS OF TIMESTAMP SYSDATE-3;
Code Listing 9: Flashback Version Query
SQL> select EMPLOYEE_ID, FIRST_NAME, JOB_ID, VACATION_BALANCE,
2 VERSIONS_STARTTIME TS,
3 nvl(VERSIONS_OPERATION,'I') OP
4 from EMPLOYEES
5 versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
6 where EMPLOYEE_ID = 100
7 order by EMPLOYEE_ID, ts;
Enable the capture of context-level information automatically and without changes to the base tables.
Barbara contacts her administrator and requests context-level tracking to be enabled for Flashback Data Archive:
SQL> exec dbms_flashback_archive.set_context_level('ALL');
Code Listing 10: Flashback Version Query with context information
SQL> select EMPLOYEE_ID, FIRST_NAME, VACATION_BALANCE,
2 VERSIONS_STARTTIME TS,
3 dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','SESSION_USER') AS who,
4 dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','MODULE') AS program
5 from EMPLOYEES
6 versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
7 where EMPLOYEE_ID = 100
8 order by EMPLOYEE_ID, ts;
-------------------
Oracle Flashback Version Query - retrieve metadata and historical data
for a specific time interval. You can view all the rows of a table that
ever existed during a given time interval. Metadata about the different
versions of rows includes start and end time, type of change operation,
and identity of the transaction that created the row version. You use
the VERSIONS BETWEEN clause of the SELECT statement to create a
Flashback Version Query.
https://docs.oracle.com/cd/B14117_01/appdev.101/b10795/adfns_fl.htm
------------------
I learned from Tom Kyte that in 12c many insert triggers (also for Apex 5 programs) are obsolete.
https://youtu.be/-mK75UyIblQ?t=1m
Minutes 1:00 to 2:14
Code Listing 11: A new template for common rows initialization
create sequence department_seq;
create table departments (
department_id -- number generated by default on null as identity -- or explicitly:
number default on null department_seq.nextval
constraint departments_id_pk primary key,
name varchar2(255) not null,
location varchar2(4000),
country varchar2(4000),
created date default on null sysdate not null,
created_by varchar2(255) default on null nvl(sys_context('APEX$SESSION','APP_USER'),user) not null,
updated date default on null sysdate not null,
updated_by varchar2(255) default on null nvl(sys_context('APEX$SESSION','APP_USER'),user) not null
)
;
-- triggers
create or replace trigger tgr_departments_bu
before update
on departments
for each row
begin
:new.updated := sysdate;
:new.updated_by := nvl(sys_context('APEX$SESSION','APP_USER'),user);
end tgr_departments_bu;
/
------------------
Catch the Apex Session Context:
I have learned that with the option
exec dbms_flashback_archive.set_context_level('ALL');
the entire context, including custom contexts, is recorded.
In the APEX environment (e.g. APEX 5 / SQL Workshop / SQL Commands), the following query returns three session variables that are presumably recorded (namespace APEX$SESSION, attributes APP_SESSION, APP_USER, WORKSPACE_ID):
SELECT * FROM SESSION_CONTEXT;
-- Then there are also a bunch of variables in the namespace USERENV:
select
sys_context ('userenv','ACTION') ACTION,
sys_context ('userenv','AUDITED_CURSORID') AUDITED_CURSORID,
sys_context ('userenv','AUTHENTICATED_IDENTITY') AUTHENTICATED_IDENTITY,
sys_context ('userenv','AUTHENTICATION_DATA') AUTHENTICATION_DATA,
sys_context ('userenv','AUTHENTICATION_METHOD') AUTHENTICATION_METHOD,
sys_context ('userenv','BG_JOB_ID') BG_JOB_ID,
sys_context ('userenv','CLIENT_IDENTIFIER') CLIENT_IDENTIFIER,
sys_context ('userenv','CLIENT_INFO') CLIENT_INFO,
sys_context ('userenv','CURRENT_BIND') CURRENT_BIND,
sys_context ('userenv','CURRENT_EDITION_ID') CURRENT_EDITION_ID,
sys_context ('userenv','CURRENT_EDITION_NAME') CURRENT_EDITION_NAME,
sys_context ('userenv','CURRENT_SCHEMA') CURRENT_SCHEMA,
sys_context ('userenv','CURRENT_SCHEMAID') CURRENT_SCHEMAID,
sys_context ('userenv','CURRENT_SQL') CURRENT_SQL,
sys_context ('userenv','CURRENT_SQLn') CURRENT_SQLn,
sys_context ('userenv','CURRENT_SQL_LENGTH') CURRENT_SQL_LENGTH,
sys_context ('userenv','CURRENT_USER') CURRENT_USER,
sys_context ('userenv','CURRENT_USERID') CURRENT_USERID,
sys_context ('userenv','DATABASE_ROLE') DATABASE_ROLE,
sys_context ('userenv','DB_DOMAIN') DB_DOMAIN,
sys_context ('userenv','DB_NAME') DB_NAME,
sys_context ('userenv','DB_UNIQUE_NAME') DB_UNIQUE_NAME,
sys_context ('userenv','DBLINK_INFO') DBLINK_INFO,
sys_context ('userenv','ENTRYID') ENTRYID,
sys_context ('userenv','ENTERPRISE_IDENTITY') ENTERPRISE_IDENTITY,
sys_context ('userenv','FG_JOB_ID') FG_JOB_ID,
sys_context ('userenv','GLOBAL_CONTEXT_MEMORY') GLOBAL_CONTEXT_MEMORY,
sys_context ('userenv','GLOBAL_UID') GLOBAL_UID,
sys_context ('userenv','HOST') HOST,
sys_context ('userenv','IDENTIFICATION_TYPE') IDENTIFICATION_TYPE,
sys_context ('userenv','INSTANCE') INSTANCE,
sys_context ('userenv','INSTANCE_NAME') INSTANCE_NAME,
sys_context ('userenv','IP_ADDRESS') IP_ADDRESS,
sys_context ('userenv','ISDBA') ISDBA,
sys_context ('userenv','LANG') LANG,
sys_context ('userenv','LANGUAGE') LANGUAGE,
sys_context ('userenv','MODULE') MODULE,
sys_context ('userenv','NETWORK_PROTOCOL') NETWORK_PROTOCOL,
sys_context ('userenv','NLS_CALENDAR') NLS_CALENDAR,
sys_context ('userenv','NLS_CURRENCY') NLS_CURRENCY,
sys_context ('userenv','NLS_DATE_FORMAT') NLS_DATE_FORMAT,
sys_context ('userenv','NLS_DATE_LANGUAGE') NLS_DATE_LANGUAGE,
sys_context ('userenv','NLS_SORT') NLS_SORT,
sys_context ('userenv','NLS_TERRITORY') NLS_TERRITORY,
sys_context ('userenv','OS_USER') OS_USER,
sys_context ('userenv','POLICY_INVOKER') POLICY_INVOKER,
sys_context ('userenv','PROXY_ENTERPRISE_IDENTITY') PROXY_ENTERPRISE_IDENTITY,
sys_context ('userenv','PROXY_USER') PROXY_USER,
sys_context ('userenv','PROXY_USERID') PROXY_USERID,
sys_context ('userenv','SERVER_HOST') SERVER_HOST,
sys_context ('userenv','SERVICE_NAME') SERVICE_NAME,
sys_context ('userenv','SESSION_EDITION_ID') SESSION_EDITION_ID,
sys_context ('userenv','SESSION_EDITION_NAME') SESSION_EDITION_NAME,
sys_context ('userenv','SESSION_USER') SESSION_USER,
sys_context ('userenv','SESSION_USERID') SESSION_USERID,
sys_context ('userenv','SESSIONID') SESSIONID,
sys_context ('userenv','SID') SID,
sys_context ('userenv','STATEMENTID') STATEMENTID,
sys_context ('userenv','TERMINAL') TERMINAL
from dual;
-- If I combine this correctly, the history query (Code Listing 10) could be extended as follows:
SQL> select EMPLOYEE_ID, FIRST_NAME, VACATION_BALANCE,
       VERSIONS_STARTTIME TS,
       nvl(dbms_flashback_archive.get_sys_context(versions_xid, 'APEX$SESSION','APP_USER'),
           dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','SESSION_USER')) AS who,
       dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','MODULE') AS program,
       dbms_flashback_archive.get_sys_context(versions_xid, 'APEX$SESSION','APP_SESSION') AS app_session,
       dbms_flashback_archive.get_sys_context(versions_xid, 'APEX$SESSION','WORKSPACE_ID') AS workspace_id
     from EMPLOYEES
     versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
     where EMPLOYEE_ID = 100
     order by EMPLOYEE_ID, ts;
----
Here are some screenshots from my Schema Riser application, which displays a change log report.
----
Regards
Dirk Strack
I often had the same performance problem as Barbara in the following article:
http://www.oracle.com/technetwork/issue-archive/2016/16-mar/o26performance-2925662.html
-----------
The better way leads use to use flashback archive as a table property.
The most important new features are:
* Own logging can be omitted because every change is available as a chronological log.
* A chronological protocol helps to identify unnecessary program steps (for example, delete and re-create rather than merge) and thus optimize.
* For debugging, the state of the database can be retrieved before a program crashes.
* Synchronization jobs that regularly process last updates only need to remember the last call time
to find the last updates on the next call.
* Accounting rules can be adhered to.
- The assignment of data entry to a responsible person and time.
- Entries are not overwritten but automatically canceled and entered again
and that is not the quality of any program. It can not be bypassed.
-----------
I'm glad that these flashback methods in Oracle 12c are now finally available and that they could easily be used. If the FLASHBACK ARCHIVE and ROW ARCHIVAL properties are set for all normal tables in a schema (for recording time) and dbms_flashback_archive.set_context_level has been set (for recording context inc. responsible persons), then an Immutable Database is created with completely new, really wonderful features. When points in time is available as an additional dimension to all objects, then objects become facts and the "Epochal Time Model" can be used.
Quick learning:
(Rich Hickey: Database as a Value)
https://youtu.be/EKdV1IgAaFc?t=12m13s
Minute 12 to 16
Famous applications that make use of this model are the Mac OS Time Machine from Apple and Source Tree for git.
----
and for Oracle see:
https://youtu.be/ZU9OnoHhbNM?t=26m15s
--- Flashback Data Archive ---
* Now available in all licensed editions
* Replaces the trigger approach to populate history tables
* Designate a tablespace to hold history data along with the retention period for the data
+ Specify FLASHBACK ARCHIVE for tables ore ALTER TABLE
* Flashback data can be used
* to look at historical data at a point in time
* to correct data committed to a base table
* Supported with most of the common DDL commands against the source table
--- In-Database Archiving ---
* Enhancement to Flashback Data Archive
* Mark table rows as invisible (logical delete)
* ALTER TABLE ... ROW ARCHIVAL
----
Relevant information from Connor McDonalds article in excerpts:
discover Oracle Flashback’s Oracle Flashback Data Archive feature:
“Flashback Data Archive provides the ability to track and store all transactional
changes to a table over its lifetime. It is no longer necessary to build
this intelligence into your application. A Flashback Data Archive is useful for
compliance with record stage policies and audit reports.”
... a better way.:
First she allocates some storage to hold her archive data:
SQL> create tablespace SPACE_FOR_ARCHIVE
2 datafile 'C:\ORACLE\DB12\ARCH_SPACE.DBF'
3 size 50G;
Tablespace created.
Next, she creates a new structure, called a flashback archive, which defines the retention duration of the row change history for any table placed into it.
SQL> CREATE FLASHBACK ARCHIVE longterm
2 TABLESPACE space_for_archive
3 RETENTION 1 YEAR;
Flashback archive created
And then she simply associates her HR tables with the flashback archive.
SQL> ALTER TABLE EMPLOYEES FLASHBACK ARCHIVE LONGTERM;
Table altered.
Code Listing 8: Execution plan for tables associated with Flashback Data Archive
SQL> set autotrace traceonly explain
SQL> select * from EMPLOYEES AS OF TIMESTAMP SYSDATE-3;
Code Listing 9: Flashback Version Query
SQL> select EMPLOYEE_ID, FIRST_NAME, JOB_ID, VACATION_BALANCE,
2 VERSIONS_STARTTIME TS,
3 nvl(VERSIONS_OPERATION,'I') OP
4 from EMPLOYEES
5 versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
6 where EMPLOYEE_ID = 100
7 order by EMPLOYEE_ID, ts;
Enable the capture of context-level information automatically and without changes to the base tables.
Barbara contacts her administrator and requests context-level tracking to be enabled for Flashback Data Archive:
SQL> exec dbms_flashback_archive.set_context_level('ALL');
Code Listing 10: Flashback Version Query with context information
SQL> select EMPLOYEE_ID, FIRST_NAME, VACATION_BALANCE,
2 VERSIONS_STARTTIME TS,
3 dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','SESSION_USER') AS who,
4 dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','MODULE') AS program
5 from EMPLOYEES
6 versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
7 where EMPLOYEE_ID = 100
8 order by EMPLOYEE_ID, ts;
-------------------
Oracle Flashback Version Query - retrieve metadata and historical data
for a specific time interval. You can view all the rows of a table that
ever existed during a given time interval. Metadata about the different
versions of rows includes start and end time, type of change operation,
and identity of the transaction that created the row version. You use
the VERSIONS BETWEEN clause of the SELECT statement to create a
Flashback Version Query.
https://docs.oracle.com/cd/B14117_01/appdev.101/b10795/adfns_fl.htm
------------------
I learned from Tom Kyte that in 12c many insert triggers (also for Apex 5 programs) are obsolete.
https://youtu.be/-mK75UyIblQ?t=1m
Minute 1.00 bis 2:14
Code Listing 11: A new template for common rows initialization
create sequence department_seq;
create table departments (
department_id -- number generated by default on null as identity -- oder explizit
number default on null department_seq.nextval
constraint departments_id_pk primary key,
name varchar2(255) not null,
location varchar2(4000),
country varchar2(4000),
created date default on null sysdate not null,
created_by varchar2(255) default on null nvl(sys_context('APEX$SESSION','APP_USER'),user) not null,
updated date default on null sysdate not null,
updated_by varchar2(255) default on null nvl(sys_context('APEX$SESSION','APP_USER'),user) not null
)
;
-- triggers
create or replace trigger tgr_departments_bu
before update
on departments
for each row
begin
:new.updated := sysdate;
:new.updated_by := nvl(sys_context('APEX$SESSION','APP_USER'),user);
end departments_biu;
/
------------------
Catch the Apex Session Context:
I have learned that with the option
exec dbms_flashback_archive.set_context_level ('ALL');
the entire context including custom context is recorded.
The following query returns in the Apex environment (z.B. Apex 5/ SQL Workshop / SQL Commands)
3 Session Variables that are probably recorded. (Namespace APEX$SESSION, Attribute: APP_SESSION, APP_USER, WORKSPACE_ID)
SELECT * FROM SESSION_CONTEXT;
-- Then there are also a bunch of variables in the namespace USERENV:
select
sys_context ('userenv','ACTION') ACTION,
sys_context ('userenv','AUDITED_CURSORID') AUDITED_CURSORID,
sys_context ('userenv','AUTHENTICATED_IDENTITY') AUTHENTICATED_IDENTITY,
sys_context ('userenv','AUTHENTICATION_DATA') AUTHENTICATION_DATA,
sys_context ('userenv','AUTHENTICATION_METHOD') AUTHENTICATION_METHOD,
sys_context ('userenv','BG_JOB_ID') BG_JOB_ID,
sys_context ('userenv','CLIENT_IDENTIFIER') CLIENT_IDENTIFIER,
sys_context ('userenv','CLIENT_INFO') CLIENT_INFO,
sys_context ('userenv','CURRENT_BIND') CURRENT_BIND,
sys_context ('userenv','CURRENT_EDITION_ID') CURRENT_EDITION_ID,
sys_context ('userenv','CURRENT_EDITION_NAME') CURRENT_EDITION_NAME,
sys_context ('userenv','CURRENT_SCHEMA') CURRENT_SCHEMA,
sys_context ('userenv','CURRENT_SCHEMAID') CURRENT_SCHEMAID,
sys_context ('userenv','CURRENT_SQL') CURRENT_SQL,
sys_context ('userenv','CURRENT_SQLn') CURRENT_SQLn,
sys_context ('userenv','CURRENT_SQL_LENGTH') CURRENT_SQL_LENGTH,
sys_context ('userenv','CURRENT_USER') CURRENT_USER,
sys_context ('userenv','CURRENT_USERID') CURRENT_USERID,
sys_context ('userenv','DATABASE_ROLE') DATABASE_ROLE,
sys_context ('userenv','DB_DOMAIN') DB_DOMAIN,
sys_context ('userenv','DB_NAME') DB_NAME,
sys_context ('userenv','DB_UNIQUE_NAME') DB_UNIQUE_NAME,
sys_context ('userenv','DBLINK_INFO') DBLINK_INFO,
sys_context ('userenv','ENTRYID') ENTRYID,
sys_context ('userenv','ENTERPRISE_IDENTITY') ENTERPRISE_IDENTITY,
sys_context ('userenv','FG_JOB_ID') FG_JOB_ID,
sys_context ('userenv','GLOBAL_CONTEXT_MEMORY') GLOBAL_CONTEXT_MEMORY,
sys_context ('userenv','GLOBAL_UID') GLOBAL_UID,
sys_context ('userenv','HOST') HOST,
sys_context ('userenv','IDENTIFICATION_TYPE') IDENTIFICATION_TYPE,
sys_context ('userenv','INSTANCE') INSTANCE,
sys_context ('userenv','INSTANCE_NAME') INSTANCE_NAME,
sys_context ('userenv','IP_ADDRESS') IP_ADDRESS,
sys_context ('userenv','ISDBA') ISDBA,
sys_context ('userenv','LANG') LANG,
sys_context ('userenv','LANGUAGE') LANGUAGE,
sys_context ('userenv','MODULE') MODULE,
sys_context ('userenv','NETWORK_PROTOCOL') NETWORK_PROTOCOL,
sys_context ('userenv','NLS_CALENDAR') NLS_CALENDAR,
sys_context ('userenv','NLS_CURRENCY') NLS_CURRENCY,
sys_context ('userenv','NLS_DATE_FORMAT') NLS_DATE_FORMAT,
sys_context ('userenv','NLS_DATE_LANGUAGE') NLS_DATE_LANGUAGE,
sys_context ('userenv','NLS_SORT') NLS_SORT,
sys_context ('userenv','NLS_TERRITORY') NLS_TERRITORY,
sys_context ('userenv','OS_USER') OS_USER,
sys_context ('userenv','POLICY_INVOKER') POLICY_INVOKER,
sys_context ('userenv','PROXY_ENTERPRISE_IDENTITY') PROXY_ENTERPRISE_IDENTITY,
sys_context ('userenv','PROXY_USER') PROXY_USER,
sys_context ('userenv','PROXY_USERID') PROXY_USERID,
sys_context ('userenv','SERVER_HOST') SERVER_HOST,
sys_context ('userenv','SERVICE_NAME') SERVICE_NAME,
sys_context ('userenv','SESSION_EDITION_ID') SESSION_EDITION_ID,
sys_context ('userenv','SESSION_EDITION_NAME') SESSION_EDITION_NAME,
sys_context ('userenv','SESSION_USER') SESSION_USER,
sys_context ('userenv','SESSION_USERID') SESSION_USERID,
sys_context ('userenv','SESSIONID') SESSIONID,
sys_context ('userenv','SID') SID,
sys_context ('userenv','STATEMENTID') STATEMENTID,
sys_context ('userenv','TERMINAL') TERMINAL
from dual;
-- If I combine this correctly, the history query (Code Listing 10) can be extended as follows:
SQL> select EMPLOYEE_ID, FIRST_NAME, VACATION_BALANCE,
       VERSIONS_STARTTIME TS,
       nvl(dbms_flashback_archive.get_sys_context(versions_xid, 'APEX$SESSION','APP_USER'),
           dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','SESSION_USER')) AS who,
       dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV','MODULE') AS program,
       dbms_flashback_archive.get_sys_context(versions_xid, 'APEX$SESSION','APP_SESSION') AS app_session,
       dbms_flashback_archive.get_sys_context(versions_xid, 'APEX$SESSION','WORKSPACE_ID') AS workspace_id
from EMPLOYEES
versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
where EMPLOYEE_ID = 100
order by EMPLOYEE_ID, ts;
----
Here are some screenshots from my Schema Riser application, which displays a change log report.
Screenshot 1
Screenshot 2
----
Regards
Dirk Strack
Monday, 3 April 2017
Calculate your working hours
The following query calculates your working hours by condensing the APEX_WORKSPACE_ACTIVITY_LOG view:
select distinct APEX_USER,
       TO_CHAR(MIN_DATE, 'Day DD.Mon.') START_DATE,
       TO_CHAR(MIN_DATE, 'HH24:MI') START_TIME,
       TO_CHAR(MAX_DATE, 'HH24:MI') STOP_TIME,
       EXTRACT(HOUR FROM WORK_TIME_PERIOD) || ':' ||
       LTRIM(TO_CHAR(ROUND(EXTRACT(MINUTE FROM WORK_TIME_PERIOD)
           + EXTRACT(SECOND FROM WORK_TIME_PERIOD) / 60, 0), '09')) WORKING_HOURS,
       EXTRACT(HOUR FROM BREAK_TIME_PERIOD) || ':' ||
       LTRIM(TO_CHAR(ROUND(EXTRACT(MINUTE FROM BREAK_TIME_PERIOD)
           + EXTRACT(SECOND FROM BREAK_TIME_PERIOD) / 60, 0), '09')) BREAK_HOURS,
       APPLICATION_NAME, PAGE_NAME, EVENT_CNT, GID
from (
    select distinct APEX_USER, MIN_DATE, MAX_DATE,
           ((MAX_DATE - MIN_DATE) DAY(9) TO SECOND) WORK_TIME_PERIOD,
           ((LEAD(MIN_DATE) OVER (PARTITION BY APEX_USER ORDER BY MIN_DATE ASC)
               - MAX_DATE) DAY(9) TO SECOND) BREAK_TIME_PERIOD,
           APPLICATION_NAME, PAGE_NAME, EVENT_CNT, GID
    from (
        select distinct APEX_USER, GID,
               MIN(VIEW_DATE) OVER (PARTITION BY APEX_USER, GID) MIN_DATE,
               MAX(VIEW_DATE) OVER (PARTITION BY APEX_USER, GID) MAX_DATE,
               COUNT(VIEW_DATE) OVER (PARTITION BY APEX_USER, GID) EVENT_CNT,
               FIRST_VALUE(APPLICATION_NAME) OVER (PARTITION BY APEX_USER, GID ORDER BY VIEW_DATE DESC) APPLICATION_NAME,
               FIRST_VALUE(PAGE_NAME) OVER (PARTITION BY APEX_USER, GID ORDER BY VIEW_DATE DESC) PAGE_NAME
        from (
            select APEX_USER, APPLICATION_NAME, PAGE_NAME, VIEW_DATE,
                   SUM(GAP) OVER (PARTITION BY APEX_USER ORDER BY VIEW_DATE ASC) GID
            from (
                select APEX_USER, APPLICATION_NAME, PAGE_NAME, VIEW_DATE,
                       CASE WHEN VIEW_DATE - FIRST_VALUE(VIEW_DATE) OVER (
                                PARTITION BY APEX_USER ORDER BY VIEW_DATE ASC
                                ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) < 1 / 1440 * 20
                            THEN 0 ELSE 1 END AS GAP
                from APEX_WORKSPACE_ACTIVITY_LOG
            )
        )
    )
)
where APEX_USER = V('APP_USER')
and MIN_DATE > SYSDATE - 3
order by GID;
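The core of the query is the gap detection: an event starts a new group (GID) whenever 20 minutes or more have passed since the previous event of the same user. A minimal Python sketch of this sessionization, with invented timestamps standing in for APEX_WORKSPACE_ACTIVITY_LOG data:

```python
from datetime import datetime, timedelta

GAP = timedelta(minutes=20)  # same threshold as 1/1440 * 20 in the SQL

def sessions(events):
    """Group sorted timestamps into sessions; a gap of 20 minutes or more starts a new one."""
    groups = []
    for ts in sorted(events):
        if groups and ts - groups[-1][-1] < GAP:
            groups[-1].append(ts)          # continue the current session
        else:
            groups.append([ts])            # gap detected: open a new session
    # one row per session: start time, stop time, event count
    return [(g[0], g[-1], len(g)) for g in groups]

log = [datetime(2017, 4, 3, 9, 0), datetime(2017, 4, 3, 9, 10),
       datetime(2017, 4, 3, 9, 25), datetime(2017, 4, 3, 13, 0)]
for start, stop, cnt in sessions(log):
    print(start.time(), stop.time(), cnt)
```

The SQL version does the same grouping with a running SUM over the 0/1 gap flags, which is the standard windowed-aggregate formulation of this loop.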
Wednesday, 22 March 2017
Managing Gigabytes of files
Some time ago I had an APEX project for the distribution of documents and photos.
The documents were to be displayed in a folder hierarchy and made available for download. More than 3 GB of files were to be published, so using zip archives for upload and download was an obvious choice.
The open source library as_zip contained the functions needed for this:
https://technology.amis.nl/2010/06/09/parsing-a-microsoft-word-docx-and-unzip-zipfiles-with-plsql/
When reading larger archives with thousands of files using the function as_zip.get_file, however, performance broke down. The cause was quickly found: the function performs a sequential scan through the archive until the given file path name is found, and before each comparison the binary content also has to be converted to Unicode.
So an index for the archive is needed so that as_zip.get_file can access entries directly. Since I had already produced a date list from the directory with the procedure as_zip.get_file_date_list, it was not hard to also calculate the file offset for each contained file in an array. With that, as_zip.get_file could really get going and read large archives in time proportional to the file size.
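The same index idea can be illustrated with Python's zipfile module (a sketch, not the PL/SQL as_zip code): the archive's central directory is parsed once on open, and the resulting name-to-entry map gives each file's byte offset, so a lookup is a seek plus read instead of a sequential scan. NameToInfo is a CPython implementation detail used here only to show the index.

```python
import io
import zipfile

# build a small archive in memory with many entries
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    for i in range(1000):
        z.writestr('file_%04d.txt' % i, 'content %d' % i)

# ZipFile parses the central directory once on open; NameToInfo is the
# name -> entry index that makes each lookup a direct seek
with zipfile.ZipFile(buf) as z:
    info = z.NameToInfo['file_0500.txt']
    offset = info.header_offset          # byte offset of the local header
    data = z.read('file_0500.txt')       # seek + read, no sequential scan
```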
To speed up the process further, I developed the library unzip_parallel. This library reads a zip archive from a SQL table and writes all contained files into two tables:
one for the files, and a second for the folder structure as a recursive parent-child relation.
(This way the folder structure can easily be displayed in a tree region.)
For larger archives the library uses the Oracle package dbms_parallel_execute.
This cut the processing time in half again.
I have now published the functionality of this library as an APEX plug-in:
Process plug-in for reading a zip file from a table, storing all expanded files
in one table and the folders for the files in a second table.
The table for the files has at least the two columns file_name varchar2 and file_content blob,
and optionally file_date date, file_size number, mime_type varchar2(300), folder_id number.
The table for the folders has at least the columns folder_id number, parent_id number, folder_name varchar2.
When no folder definition is provided in the Folder Query attribute, full path names are stored in the file_name field of the files table.
Zip files larger than 5 MB will be processed in parallel to reduce the processing time when parallel execution is enabled.
A demo of the plugin can be found here:
Can I have the Last-Modified Date - for those files?
In one of my recent projects for building management, invoices and other documents were scanned and loaded into the database. For the business users, the document date is very important, next to the name. The APEX 4 environment offered no complete support for this, so the date was entered manually.
I wanted to change that:
1. When uploading a file in a web application, the last-modified date can be read with JavaScript and transferred into an input field next to the name.
2. When downloading via a normal link, only the file name and size can be set in the header. If a zip file is created from the content first and then downloaded, the correct modification date can be set when unpacking.
On the net, I found a cool open source library, as_zip by Anton Scheffer:
https://technology.amis.nl/2010/06/09/parsing-a-microsoft-word-docx-and-unzip-zipfiles-with-plsql/
I adapted this library so that the document date can now be passed to the function add1file.
3. When uploading a zip archive, the file date of each contained file can be extracted from the archive. For reading with file dates, I added the procedure get_file_date_list.
The JavaScript function for point 1 reads the last-modified date of the selected file and copies the file name and a formatted date string into the page items:
function GetFileNameDate(p_FileItem, pNameItem, pDateItem) {
    $x(pNameItem).value = $x(p_FileItem).value;
    var input = $x(p_FileItem);
    if (input && input.files && input.files[0]) {
        var fdate = input.files[0].lastModifiedDate;
        var day  = fdate.getDate();
        var mon  = fdate.getMonth() + 1;
        var year = fdate.getFullYear();
        var hour = fdate.getHours();
        var min  = fdate.getMinutes();
        if (day < 10)  { day  = '0' + day; }
        if (mon < 10)  { mon  = '0' + mon; }
        if (year < 10) { year = '200' + year; } else if (year < 100) { year = '20' + year; }
        if (hour < 10) { hour = '0' + hour; }
        if (min < 10)  { min  = '0' + min; }
        var datestr = day + "." + mon + "." + year + " " + hour + ":" + min;
        $x(pDateItem).value = datestr;
    }
}
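What get_file_date_list provides — one (name, date) pair per contained file — corresponds to reading the date_time field stored per entry in the archive's directory. A minimal Python sketch with an invented file name and date, not the PL/SQL procedure itself:

```python
import io
import zipfile
from datetime import datetime

# build an archive where each entry carries its own modification date
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    zi = zipfile.ZipInfo('invoice.pdf', date_time=(2017, 3, 22, 14, 30, 0))
    z.writestr(zi, b'dummy content')

# reading the directory yields one (name, date) pair per file,
# analogous to get_file_date_list
with zipfile.ZipFile(buf) as z:
    file_dates = [(i.filename, datetime(*i.date_time)) for i in z.infolist()]
```

Note that the zip format stores timestamps with two-second granularity, so seconds may be rounded down to an even value.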
Thursday, 23 February 2017
Be or not - for a table row
In all of my database projects, I have used database schemas in third normal form with lots of foreign keys. When you build forms to maintain the application data, a delete button is usually added. If you have created a page of type 'DML Form' with a 'Delete' button, this button is shown for every existing row.
When the application user tries to delete a row that is referenced directly or indirectly in a child table
by a foreign key with a delete_rule of 'NO ACTION', the system will return the error "ORA-02292: integrity constraint <constraint name> violated - child record found". I think it would be better not to show the delete button when the row must exist. To solve this problem, I have written the view V_DELETE_CHECK. For every table in the current schema, the view produces a subquery that can test, for a given table_name and primary key value, whether the row is deletable or not. The view supports deep hierarchies of on-delete-cascade rules, composite foreign keys, and recursive relations like tree tables.
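At its core, each generated subquery answers one question: does any child row reference this primary key through a NO ACTION foreign key? A minimal sketch of that check, with SQLite standing in for Oracle and invented table names:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('PRAGMA foreign_keys = ON')
con.execute('create table dept (id integer primary key)')
# SQLite foreign keys default to ON DELETE NO ACTION, like the Oracle case
con.execute('create table emp (id integer primary key, '
            'dept_id integer references dept(id))')
con.execute('insert into dept values (1), (2)')
con.execute('insert into emp values (10, 1)')   # dept 1 is referenced

def is_deletable(dept_id):
    # mirrors what a generated V_DELETE_CHECK subquery tests:
    # deletable iff no child row references the primary key value
    row = con.execute('select exists(select 1 from emp where dept_id = ?)',
                      (dept_id,)).fetchone()
    return row[0] == 0
```

The real view additionally walks cascade hierarchies and composite keys; this sketch only shows the single-level existence test.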
Now you can use this plugin to hide the delete button when a table row cannot be deleted.
Today I have released my first APEX plugin on apex.world:
strack-software-delete-check-plugin