Related Questions
How can we call a routine in a DataStage job? Explain with steps.
What is job control? How is it developed? Explain with steps.
What is job control? How can it be used? Explain with steps.
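(For both job-control questions above: job control is usually written in a job's Job Control tab with the DataStage BASIC API, using functions such as DSAttachJob, DSRunJob, DSWaitForJob and DSGetJobInfo, or modelled graphically as a Sequencer job. As a rough illustration of the same idea from the command line, here is a minimal shell sketch that runs two jobs in order with the dsjob utility and stops if the first one fails; the project name MyProject and the job names LoadStaging and LoadWarehouse are assumptions.)

    #!/bin/bash
    # Minimal job-control sketch using the dsjob CLI (all names are assumptions).
    PROJECT=MyProject

    # -jobstatus makes dsjob wait for the job and reflect its status in the exit code
    # (1 = finished OK, 2 = finished with warnings, higher values indicate failure).
    dsjob -run -jobstatus "$PROJECT" LoadStaging
    rc=$?
    if [ "$rc" -gt 2 ]; then
        echo "LoadStaging failed with status $rc" >&2
        exit 1
    fi

    dsjob -run -jobstatus "$PROJECT" LoadWarehouse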
How to find errors in a job sequence?
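(Errors in a sequence are normally found by opening its log in the Director and inspecting the warning and fatal entries for each activity. A quick command-line alternative is to pull the log with dsjob; a sketch, assuming a project MyProject, a sequence job MasterSeq, and a placeholder event id of 123:)

    # Summary of recent warning entries for the sequence (names are assumptions)
    dsjob -logsum -type WARNING -max 50 MyProject MasterSeq

    # Full text of one log entry, identified by the event id shown in the summary
    dsjob -logdetail MyProject MasterSeq 123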
Is it possible for two users to access the same job at the same time in DataStage?
If the size of a hash file exceeds 2 GB, what happens? Does it overwrite the current rows?
How to drop the index before loading data into the target, and how to rebuild it afterwards, in DataStage?
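(One common approach is to run the DDL as before- and after-job SQL, either in the OCI stage's before/after SQL properties or from a before/after-job ExecSH call that invokes sqlplus; an alternative is ALTER INDEX ... UNUSABLE before the load and ALTER INDEX ... REBUILD after it. A minimal shell sketch of the sqlplus route, with the index name, table, columns and connection string all as assumptions:)

    #!/bin/bash
    # Before the load: drop the index (index/table/credentials are assumptions)
    echo "DROP INDEX sales_fact_idx;" | sqlplus -s scott/tiger@orcl

    # ... the DataStage load runs here ...

    # After the load: re-create the index
    echo "CREATE INDEX sales_fact_idx ON sales_fact (sale_id);" | sqlplus -s scott/tiger@orcl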
How to parameterise a field in a sequential file? I am using DataStage as the ETL tool, with a sequential file as the source.
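(The usual technique is to define a job parameter, for example FILE_NAME, and reference it as #FILE_NAME# in the Sequential File stage's file-name property, or in a downstream derivation, then supply the value at run time from the Director or the command line. A one-line sketch with assumed project, job and path names:)

    # Run the job, passing the file name as a job parameter (all names are assumptions)
    dsjob -run -jobstatus -param FILE_NAME=/data/in/sales_20060206.txt MyProject LoadSales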
How to kill a job in DataStage?
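(A job is normally stopped from the Director, Job > Stop, or from the command line with dsjob; if it is hung you may then also need to clear its status file or kill the phantom process. A minimal sketch, assuming project MyProject and job LoadSales:)

    # Request the job to stop (names are assumptions)
    dsjob -stop MyProject LoadSales

    # Check what state it is in afterwards
    dsjob -jobinfo MyProject LoadSales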
Where do we use the Link Partitioner in a DataStage job? Explain with an example.
What is the difference between a sequential file and a dataset? When should the Copy stage be used?
What is the purpose of the Exception activity in DataStage 7.5?
How do I create a DataStage engine stop/start script? My idea is roughly as below:
    #!/bin/bash
    # log in as dsadm, then su to root (encrypted password)
    DSHOMEBIN=/Ascential/DataStage/home/dsadm/Ascential/DataStage/DSEngine/bin
    # if ps -ef | grep DataStage shows a client connection:
    #     kill -9 <PID of the client connection>
    uv -admin -stop > /dev/null
    uv -admin -start > /dev/null
    # verify the processes and check the connection
    echo "Started properly"
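(A cleaned-up sketch along those lines, run as dsadm, might look like the following; the install path, the 30-second pause, and the dsapi_slave and dsrpcd process names are assumptions to adjust for your environment:)

    #!/bin/bash
    # Sketch: restart the DataStage server engine (paths/process names are assumptions).
    DSHOME=/Ascential/DataStage/DSEngine
    cd "$DSHOME" || exit 1
    . ./dsenv                                   # source the engine environment

    # Kill any leftover client connections before stopping the engine
    for pid in $(ps -ef | grep dsapi_slave | grep -v grep | awk '{print $2}'); do
        kill -9 "$pid"
    done

    bin/uv -admin -stop  > /dev/null
    sleep 30                                    # give the engine time to shut down
    bin/uv -admin -start > /dev/null

    # Crude verification that the engine is back
    if ps -ef | grep -v grep | grep -q dsrpcd; then
        echo "Started properly"
    else
        echo "Engine did not start" >&2
        exit 1
    fi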
What does the separation option in a static hash file mean?
What is the difference between symmetric multiprocessing (SMP) and massively parallel processing (MPP)?
How to implement slowly changing dimensions in DataStage?
How to improve the performance of a hash file?
My requirement is like this.
Here is the codification suggested:
SALE_HEADER_XXXXX_YYYYMMDD.PSV and SALE_LINE_XXXXX_YYYYMMDD.PSV
where XXXXX is an LVM sequence number that ensures the uniqueness and continuity of file exchanges (caution: there is an increment to implement), and YYYYMMDD is the LVM date of file creation.
Compression and delivery to: SALE_HEADER_XXXXX_YYYYMMDD.ZIP and SALE_LINE_XXXXX_YYYYMMDD.ZIP.
If we run the job, the target file names should be sale_header_1_20060206 and sale_line_1_20060206.
If we run it again, the target files should be sale_header_2_20060206 and sale_line_2_20060206.
If we run the same job the next day, the target files should be sale_header_3_20060306 and sale_line_3_20060306.
That is, whenever we run the same job, the target file name should change automatically to filename_(previous number + 1)_currentdate.
Please do the needful by replying to this question.
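(One way to get that behaviour is to write the job's output to fixed staging names and then, in an after-job routine such as an ExecSH call or the last step of a sequence, rename and compress the files using a small counter file that persists the last sequence number. A minimal sketch; all paths, the counter file, and the staging file names are assumptions:)

    #!/bin/bash
    # Sketch: rename/compress the extracts with an incrementing sequence + run date.
    OUT_DIR=/data/out                     # where the job writes its staging files
    SEQ_FILE=/data/control/sale_seq.txt   # persists the last sequence number used
    RUN_DATE=$(date +%Y%m%d)

    LAST=$(cat "$SEQ_FILE" 2>/dev/null)
    SEQ=$(( ${LAST:-0} + 1 ))
    echo "$SEQ" > "$SEQ_FILE"

    mv "$OUT_DIR/sale_header.psv" "$OUT_DIR/SALE_HEADER_${SEQ}_${RUN_DATE}.PSV"
    mv "$OUT_DIR/sale_line.psv"   "$OUT_DIR/SALE_LINE_${SEQ}_${RUN_DATE}.PSV"

    zip -j "$OUT_DIR/SALE_HEADER_${SEQ}_${RUN_DATE}.ZIP" "$OUT_DIR/SALE_HEADER_${SEQ}_${RUN_DATE}.PSV"
    zip -j "$OUT_DIR/SALE_LINE_${SEQ}_${RUN_DATE}.ZIP"   "$OUT_DIR/SALE_LINE_${SEQ}_${RUN_DATE}.PSV"

(Equally, the sequence and date could be passed into the job as parameters and referenced directly in the Sequential File stage's file name, with the counter incremented by the calling sequence or script.)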
What are the transaction size and array size in the OCI stage? How can they be used?