Contact

Email: laurentschneider@yahoo.com

26 thoughts on "Contact"

  1. Devang Thakkar

    Hi Laurent,
    I am not sure whether I should be asking you this question, but
    I would like to know how I can buy your book in India.
    Is there any Indian print planned by your Publisher?

    Thanks.

  2. Ray

    Hello Laurent,

    How are you doing?

    I am just starting with databases, and I think your website is really good. Is there any better way to contact you?

    I hope to hear from you.

    Regards,

    Ray Silva

  3. Ildiko

    Hi,

    Can you give me a hint for this?

    SQL> conn /as sysoper;
    Connected.
    SQL> conn /
    ERROR:
    ORA-01017: invalid username/password; logon denied

    Warning: You are no longer connected to ORACLE.

    Thank you,
    SIA
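
    A hedged guess, since the setup is not shown: "conn / as sysoper" authenticates through membership in the OS oper/dba group, while a plain "conn /" needs a database user identified externally that matches the OS account. A minimal sketch, assuming the default os_authent_prefix 'OPS$' and an OS account named oracle (both names depend on your environment):

    -- run as a privileged user; OPS$ORACLE is only an example name
    create user ops$oracle identified externally;
    grant create session to ops$oracle;
    -- afterwards "sqlplus /" or "conn /" should log that OS user in as OPS$ORACLE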

  4. Meena

    Hi Laurent,
    How can I import data from an Excel file into a table?
    Any link on your site regarding this would be of great help.

    Your link about spooling table values into Excel was of great help.
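
    One possible approach, just a sketch and not the only way: save the sheet as a .csv file and read it with an external table, then insert into the target table. The directory, file name and columns below are made up for illustration:

    -- assuming the csv was exported to /tmp/emp.csv with columns empno,ename
    create or replace directory data_dir as '/tmp';

    create table emp_ext (
      empno number,
      ename varchar2(30)
    )
    organization external (
      type oracle_loader
      default directory data_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        optionally enclosed by '"'
      )
      location ('emp.csv')
    );

    -- my_table is a placeholder for the real target table
    insert into my_table select empno, ename from emp_ext;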

  5. Eugene

    Hi Laurent,
    I wonder if you can help me solve a problem in SQL. I need to be able to display the last (max) entry for each group of values in the column. Here is the setup code:
    -------------------
    create table test_tb
    (scale_code VARCHAR2(4));

    insert into test_tb values ('B01');
    insert into test_tb values ('B02');
    insert into test_tb values ('B03');
    insert into test_tb values ('B04');
    insert into test_tb values ('B05');
    insert into test_tb values ('BP1');
    insert into test_tb values ('BP2');
    insert into test_tb values ('BP3');
    insert into test_tb values ('BR3');
    insert into test_tb values ('BU1');
    -------------------

    So, I wonder if I can get:
    B05,
    BP3,
    BR3,
    BU1
    using "select" in SQL.

    Thanks,
    Eugene
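
    A minimal sketch of one way to do it with a plain select, assuming the grouping is by the first two characters of scale_code (which is what the expected output suggests):

    select max(scale_code) scale_code
    from test_tb
    group by substr(scale_code, 1, 2)
    order by 1;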

  6. Wagner

    Hi Laurent…

    I came across your site today.

    Your articles are really very good.

    Your OCM certification is evidenced by the excellent quality of the material.

    I hope one day to reach the OCM; I am still studying for it.

    With your permission, I would like to add your site as one of the addresses listed on my blog.

    Best regards!!!

    Wagner Pinheiro

  7. Dushyant S Yadav

    Hi Laurent,

    I would like to move into an Oracle DBA role… I have just gone through your site… it really holds fabulous content… Can I contact you through the mail IDs mentioned above?

    Best Rgds
    Dushyant

  8. Eugene

    Hi Laurent,
    I am trying to come up with a pivot query for our packaged app audit table.
    Any change to any column is written as a new row. I came up with a smaller-scale example of it.
    Here is the test data:
    ————————————–
    create table ep_pivot
    (business_unit_id varchar2(3),
    sales_order_prefix varchar2(3),
    sales_order_number number(10),
    audit_timestamp date,
    customer_purchase_order varchar2(25),
    ship_address_id number(10),
    ship_address_type_id varchar2(4)
    );

    Insert into ep_pivot
    (BUSINESS_UNIT_ID, SALES_ORDER_PREFIX, SALES_ORDER_NUMBER, AUDIT_TIMESTAMP, SHIP_ADDRESS_ID,
    SHIP_ADDRESS_TYPE_ID, CUSTOMER_PURCHASE_ORDER)
    Values
    ('81', 'J', 64850, TO_DATE('02/27/2001 08:19:12', 'MM/DD/YYYY HH24:MI:SS'), 13469,
    'BILL', '3/23 AD BACKUP');
    Insert into ep_pivot
    (BUSINESS_UNIT_ID, SALES_ORDER_PREFIX, SALES_ORDER_NUMBER, AUDIT_TIMESTAMP, SHIP_ADDRESS_ID,
    SHIP_ADDRESS_TYPE_ID, CUSTOMER_PURCHASE_ORDER)
    Values
    ('81', 'J', 64850, TO_DATE('02/27/2001 08:19:39', 'MM/DD/YYYY HH24:MI:SS'), 13469,
    'BILL', '645265/669601');
    Insert into ep_pivot
    (BUSINESS_UNIT_ID, SALES_ORDER_PREFIX, SALES_ORDER_NUMBER, AUDIT_TIMESTAMP, SHIP_ADDRESS_ID,
    SHIP_ADDRESS_TYPE_ID, CUSTOMER_PURCHASE_ORDER)
    Values
    ('81', 'J', 64850, TO_DATE('02/27/2001 08:20:38', 'MM/DD/YYYY HH24:MI:SS'), 13469,
    'BILL', '645265/669601');
    Insert into ep_pivot
    (BUSINESS_UNIT_ID, SALES_ORDER_PREFIX, SALES_ORDER_NUMBER, AUDIT_TIMESTAMP, SHIP_ADDRESS_ID,
    SHIP_ADDRESS_TYPE_ID, CUSTOMER_PURCHASE_ORDER)
    Values
    ('81', 'J', 64850, TO_DATE('02/27/2001 08:20:53', 'MM/DD/YYYY HH24:MI:SS'), 13469,
    'BILL', '645265/669601');
    Insert into ep_pivot
    (BUSINESS_UNIT_ID, SALES_ORDER_PREFIX, SALES_ORDER_NUMBER, AUDIT_TIMESTAMP, SHIP_ADDRESS_ID,
    SHIP_ADDRESS_TYPE_ID, CUSTOMER_PURCHASE_ORDER)
    Values
    ('81', 'J', 64850, TO_DATE('02/27/2001 08:21:18', 'MM/DD/YYYY HH24:MI:SS'), 13469,
    'BILL', '645265');
    Insert into ep_pivot
    (BUSINESS_UNIT_ID, SALES_ORDER_PREFIX, SALES_ORDER_NUMBER, AUDIT_TIMESTAMP, SHIP_ADDRESS_ID,
    SHIP_ADDRESS_TYPE_ID, CUSTOMER_PURCHASE_ORDER)
    Values
    ('81', 'J', 64850, TO_DATE('03/14/2001 22:53:31', 'MM/DD/YYYY HH24:MI:SS'), 21321,
    'DC', '645265');
    COMMIT;

    Here is my query:
    ———————
    Select business_unit_id, sales_order_prefix, sales_order_number, ts,
    -- PO
    max( decode( rn, 1, customer_purchase_order, null )) original_cust_po
    ,max( decode( rn, 2, customer_purchase_order, null )) changed_cust_po_2
    ,max( decode( rn, 3, customer_purchase_order, null )) changed_cust_po_3
    ,max( decode( rn, 4, customer_purchase_order, null )) changed_cust_po_4
    ,max( decode( rn, 5, customer_purchase_order, null )) changed_cust_po_5
    ,max( decode( rn, 6, customer_purchase_order, null )) changed_cust_po_6
    -- Address Id
    ,max( decode( rn, 1, ship_address_id, null )) original_ship_address_id
    ,max( decode( rn, 2, ship_address_id, null )) changed_ship_address_id_2
    ,max( decode( rn, 3, ship_address_id, null )) changed_ship_address_id_3
    ,max( decode( rn, 4, ship_address_id, null )) changed_ship_address_id_4
    ,max( decode( rn, 5, ship_address_id, null )) changed_ship_address_id_5
    ,max( decode( rn, 6, ship_address_id, null )) changed_ship_address_id_6
    -- Address Type
    ,max( decode( rn, 1, ship_address_type_id, null )) original_ship_address_type_id
    ,max( decode( rn, 2, ship_address_type_id, null )) changed_ship_address_type_id_2
    ,max( decode( rn, 3, ship_address_type_id, null )) changed_ship_address_type_id_3
    ,max( decode( rn, 4, ship_address_type_id, null )) changed_ship_address_type_id_4
    ,max( decode( rn, 5, ship_address_type_id, null )) changed_ship_address_type_id_5
    ,max( decode( rn, 6, ship_address_type_id, null )) changed_ship_address_type_id_6
    from (
    select business_unit_id,sales_order_prefix,sales_order_number, audit_timestamp ts, customer_purchase_order,ship_address_id,ship_address_type_id,
    row_number() over (partition by sales_order_prefix, sales_order_number order by audit_timestamp) rn
    from ep_pivot
    order by audit_timestamp
    )
    group by business_unit_id,sales_order_prefix, sales_order_number,ts,ship_address_id,ship_address_type_id

    I wonder if it is possible (using SQL) to show a value ONLY if it has changed. In my example "customer_purchase_order" didn't change during 4 inserts.

    Thanks,
    Eugene
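
    For the "only if changed" part, a hedged sketch of one way (not necessarily the best for the full audit table): compare each row with the previous audit row using LAG in the inner query and blank out unchanged values before pivoting. Shown here for a single column of the ep_pivot test data:

    select sales_order_prefix, sales_order_number, audit_timestamp,
           case
             when customer_purchase_order =
                  lag(customer_purchase_order)
                    over (partition by sales_order_prefix, sales_order_number
                          order by audit_timestamp)
             then null                      -- unchanged: show nothing
             else customer_purchase_order   -- first value or a real change
           end customer_purchase_order
    from ep_pivot;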

  9. James

    Hi Laurent,
    Re. your very interesting entry on sftp, I have a similar brief at the moment, in that I've been tasked with altering a PL/SQL package to use sftp (or scp) rather than ftp.
    Could you point me at an example of how this is achieved in PL/SQL?
    All the examples I can find via Google are ftp only.
    Thanks in advance
    James
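
    Not the approach from the blog entry, just one hedged possibility: on 11g the database can call an OS-level scp/sftp client through an external DBMS_SCHEDULER job. The script path and file names below are made up:

    begin
      -- push_file.sh is a made-up wrapper around "scp /tmp/out.csv user@host:/incoming"
      -- (key-based ssh authentication set up for the OS account running the job)
      dbms_scheduler.create_job(
        job_name   => 'PUSH_FILE_SFTP',
        job_type   => 'EXECUTABLE',
        job_action => '/home/oracle/push_file.sh',
        enabled    => true);   -- no schedule: runs once, right away
      -- depending on the release, an OS credential may have to be attached
      -- (dbms_scheduler.create_credential) before external jobs are allowed
    end;
    /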

  10. Marek

    Hi Laurent,
    I wonder whether you might be able to give me a hint on how to approach the problem
    I'm struggling with.

    I have a historical table with 1B+ rows
    like

    DT C1 C2
    07/01/2010 NULL 1
    07/03/2010 NULL 2
    07/04/2010 NULL 4
    07/05/2010 4    5
    07/06/2010 NULL 6
    07/07/2010 1    8 (sample record being processed)
    07/09/2010 4    2

    For each row in this table, I need to extract the following:

    DT, C1, C2, and the average of C2 over 2 records meeting these criteria:
    their DT is prior to the date on the record being processed and not earlier than 5 days before that date
    (so the search window is between current - 1 day and current - 5 days),
    and their C1 is NULL.
    If there are more than 2 records with C1 NULL in the window current - 1 to current - 5 days,
    I need to use the 2 records with dates closest to the DT on the record being processed to calculate the average.
    There may be gaps in the dates.

    So for example, if we are processing the record of 07/07/2010, the search window for the 2 rows with C1 = NULL goes back to 07/02/2010 (5 days).
    There are 3 records meeting this criterion within the search window, and I need only 2 of them for the average calculation,
    the ones with DT closest to the DT on the record being processed.

    So the output for this particular row should be:
    07/07/2010 1 8 5 (the average is (6+4)/2 = 5, because the rows of 07/04/2010 and 07/06/2010 are used to calculate it)

    I'm working with 10g, so I can't use nth_value() with IGNORE NULLS to address the problem within a single statement.
    So my question is:

    – is there a nice way to formulate a single select to extract the table columns along with the average I need?
    – what would be the best approach for processing the whole 1B+ rows?

    Thank you very much for any hints,

    Marek
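
    A hedged sketch of a single select (the table name hist is made up, and it assumes one row per DT): self-join each row to its window of earlier rows with C1 NULL, keep the two most recent, and average them. It is correct for the stated rules, but a self-join over 1B+ rows is heavy, so in practice it would likely need partitioning and parallelism or a different physical approach:

    select dt, c1, c2,
           avg(case when rn <= 2 then prev_c2 end) avg_c2
    from (
      select t.dt, t.c1, t.c2, p.c2 prev_c2,
             -- rank the candidate rows by how close they are to the current row
             row_number() over (partition by t.dt order by p.dt desc) rn
      from hist t
      left join hist p
        on  p.c1 is null
        and p.dt <  t.dt
        and p.dt >= t.dt - 5
    )
    group by dt, c1, c2;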

  11. Tonya Dorsey

    I am researching a problem with SQL*Plus reporting on HP-UX. Maybe you can help me. I created a report and the output goes into a .txt file. The report looks right when displayed. The problem is that the page breaks in it are not physical page breaks. The user wants the report emailed, and it must have physical page breaks. I thought that if we converted it to a .pdf and she opened it in Adobe Reader the pages would be fine, but that is not so. The report is continuous, i.e. page 2 is attached to the second part of page 1 instead of starting on a new page.

    Can I do what I am trying to do? This is all new to me. I am wasting a lot of time trying to beautify this report.

    Thanks
    Tonya

  12. Laurent Schneider Post author

    try set newp 0

    $ echo "set feed off head on pages 10 lin 80 trims on newp 0 emb off tab off\nselect EMPNO,ENAME from emp;"|
    sqlplus -s -L scott/tiger|
    od -c
    0000000 \f E M P N O E N A M
    0000020 E \n - - - - - - - - - - - - -
    0000040 - - - - - - - \n 7 3
    0000060 6 9 S M I T H \n 7
    0000100 4 9 9 A L L E N \n
    0000120 7 5 2 1 W A R D \n
    0000140 7 5 6 6 J O N E S \n
    0000160 7 6 5 4 M A R T I N \n
    0000200 7 6 9 8 B L A K E \n
    0000220 7 7 8 2 C L A R K \n
    0000240 7 7 8 8 S C O T T \n
    0000260 \f E M P N O E N A M
    0000300 E \n - - - - - - - - - - - - -
    0000320 - - - - - - - \n 7 8
    0000340 3 9 K I N G \n 7 8
    0000360 4 4 T U R N E R \n
    0000400 7 8 7 6 A D A M S \n
    0000420 7 9 0 0 J A M E S \n
    0000440 7 9 0 2 F O R D \n
    0000460 7 9 3 4 M I L L E R \n
    0000476

    As you can see, there is a form feed (\f) at the beginning of each page.

  13. Sivagurunathan

    Hi,
    I am preparing for the Certified Network Administrator exam for Solaris 10 (CX-310-302). Could you please share any guide or practice material (now under Oracle certification) that could be helpful to me?

    Regards
    Siva

  14. JAMAICA

    Hi,

    Error 2070: ORA-02070: database CAMBIO does not support TO_CHAR in this context.

    I get this error after upgrading from Oracle 9.2.0.7 to 10.2.0.5. My ODBC interface between Oracle and MS SQL Server 2000 does not accept the TO_CHAR syntax. How can I eliminate this?

    JAMAICA
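
    A hedged note: ORA-02070 through the ODBC gateway usually means the function is being shipped to the non-Oracle side. One workaround is to fetch the raw value over the database link first and apply TO_CHAR locally; a sketch with made-up table and column names:

    declare
      v_dt date;
    begin
      -- fetch the raw value from the SQL Server side through the db link
      select order_date into v_dt from orders@cambio where order_id = 1;
      -- TO_CHAR now runs only in the local Oracle database
      dbms_output.put_line(to_char(v_dt, 'YYYY-MM-DD'));
    end;
    /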

  15. Eugene

    Hi Laurent,
    Not sure if this is the right place to ask, but I will ask anyway. I need to be able to load an XML file that is sent to us from Amazon.com into an Oracle database. Here is the sample file:

    305-3413222-1220037

    B0050FWB6G
    06782084248170
    Y18WW31_173Y_29
    SURVIVOR SHORT
    1
    0

    USD
    19.99

    USD
    0.00

    USD
    4.49

    USD
    0.00

    USD
    0.00

    USD
    0.00

    BKF50FWBAA
    06782084215425
    Y18WW81_100Y_30
    SURVIVOR PANT
    1
    0

    USD
    17.99

    USD
    0.00

    USD
    4.49

    USD
    0.00

    USD
    0.00

    USD
    0.00

    5df298d9-b6b7-4a89-a97f-42b8c971a553

    What would you recommend?
    Thanks,
    Eugene

  16. Laurent Schneider Post author

    XMLTYPE?

    create table t(c xmltype);
    insert into t(c) values (xmltype('<order>xxx</order>'));

    But it really depends on what you want to do with your XML. Maybe varchar2 or clob would be good enough, if it is not processed by the database engine.
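
    If the content does have to be processed inside the database, a small hedged sketch of shredding such a document with XMLTABLE (the element names below are invented, not the real Amazon schema):

    select x.sku, x.title, x.amount
    from t,
         xmltable('/Order/Item' passing t.c
                  columns sku    varchar2(20) path 'SKU',
                          title  varchar2(40) path 'Title',
                          amount number       path 'Price/Amount') x;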

  17. Nithya

    I am trying to use column_value.getstringval() to get data from a comma-separated list within a package.

    Below is the block of code in my package

    INSERT INTO ABC
    SELECT
    substr(a.applid, 1, length(a.applid)-2), sysdate
    from
    (select (column_value).getstringval() as applid
    from xmltable('"abc","bcd","cde","efg","hij"')
    ) a
    where
    UPPER(a.applid) not in
    (select
    distinct applid
    from
    MQSOXCNTL
    where
    UPPER(applid) like '%WS'
    and UPPER(applid) not in ('ABCWS','BCDWS','EFGWS')
    and (endtmstmp >= l_LastLoadTXN AND endtmstmp >= (sysdate - 1))
    )

    But when I compile the package, I am getting the error below:

    ORA-SQL-PLSQL-S92

    (S92) Expecting: (+) * ** + , - / || AS AT BULK DAY FROM identifier INTO MOD REM YEAR

    When I try the query alone, I am able to retrieve the data. But when I include the same query in the package, it doesn't compile:

    select (column_value).getstringval() as applid
    from xmltable('"abc","bcd","cde","efg","hij"')

    I’m using Oracle 11.2.0.2.0 edition

    Do you have any idea as to how I can rectify this?

    Thanks,
    Nithya

  18. Laurent Schneider Post author

    not sure, the syntax sounds valid

    SQL> drop table abc
    Table dropped.
    SQL> create table abc(value varchar2(10),timestamp date)
    Table created.
    SQL> drop table MQSOXCNTL
    Table dropped.
    SQL> create table MQSOXCNTL(applid varchar2(10),endtmstmp date)
    Table created.
    SQL> declare
    l_LastLoadTXN date;
    begin
    INSERT INTO ABC
    SELECT
    substr(a.applid, 1, length(a.applid)-2), sysdate
    from
    (select (column_value).getstringval() as applid
    from xmltable('"abc","bcd","cde","efg","hij"')
    ) a
    where
    UPPER(a.applid) not in
    (select
    distinct applid
    from
    MQSOXCNTL
    where
    UPPER(applid) like '%WS'
    and UPPER(applid) not in ('ABCWS','BCDWS','EFGWS')
    and (endtmstmp >= l_LastLoadTXN AND endtmstmp >= (sysdate-1))
    );
    end;
    PL/SQL procedure successfully completed.
    SQL> commit
    Commit complete.
    SQL> select * from abc

    VALUE TIMESTAMP
    ---------- ---------
    a 13-DEZ-11
    b 13-DEZ-11
    c 13-DEZ-11
    e 13-DEZ-11
    h 13-DEZ-11

    5 rows selected.

    tested with 11.2.0.2

  19. Nithya

    I tried it out in TOAD first.

    When I ran it as a script (F5), it gave the error.

    When I selected the package as a whole and pressed F9, it compiled without warnings or errors.

    I tried with SQL*Plus too, but it failed again.

    I guess I might have to use some other way to get data from a comma-separated list.

    Thanks for the quick response!

    -Nithya

  20. Henry

    Oracle’s ODCIAggregate routines, documented at http://docs.oracle.com/cd/E11882_01/appdev.112/e10765/aggr_functions.htm allow a user to define a function that can be called in analytic or aggregate fashion. Conveniently, the analytic style call allows the use of the analytic_clause. However, the aggregate style does not support the use of a windowing_clause. So, while one can take advantage of the analytic_clause to present rows in a specified order, there is no way to do so when calling in aggregate style, and no efficient way to sort them from within the function context.

    It seems a strange inconsistency that Oracle would support the analytic_clause but not the windowing_clause. Any thoughts as to why?

    Thanks very much!

    Henry
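
    To illustrate the point with a hypothetical user-defined aggregate my_agg built on the ODCIAggregate interface:

    select my_agg(sal) from emp;        -- aggregate style: no way to order the rows

    select my_agg(sal)
           over (partition by deptno order by hiredate)
    from emp;                           -- analytic style: analytic_clause accepted

    -- but the windowing_clause is rejected for user-defined aggregates:
    -- select my_agg(sal)
    --        over (order by hiredate rows between 2 preceding and current row)
    -- from emp;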
