Would like to have microseconds supported for Oracle and MS/SQL databases for TIME and TIMESTAMP data types. Currently only milliseconds are supported.
For the Oracle Timestamp data type, I believe Gen 8.5 already supports microseconds.
I recently worked an issue where a new problem was found with microseconds being lost on a .NET client -> .NET server dialog flow. During my research I verified the Gen server could successfully write & read a Timestamp attribute with microsecond precision to/from the Oracle database table.
The C# client -> server flow problem is fixed in GEN85011 with new generators for both the client & server sides.
If a new problem is reported for another generator platform, I presume we will again provide a fix.
How do you populate a timestamp with microsecond support? In my last round of testing (granted, not at 8.5), CURRENT_TIMESTAMP didn't provide a value with microseconds (at least on C), only milliseconds, with the last 3 digits always set to 0.
I was just using dummy values for the timestamp data input part of my test in Gen.
I did remember a Java web client limitation due to internal use of the Calendar object, but it looks like you are correct about CURRENT_TIMESTAMP not supporting microseconds on C as well.
I just ran an 8.5 C-based test setting an Oracle Timestamp attribute to CURRENT_TIMESTAMP during a CREATE statement, and it is truncated to milliseconds.
Not sure if this applies to all Gen generated code/DBMS platforms, but for Oracle the restriction may be historical because of older limitations with the Oracle Date data type (prior to Gen supporting the Oracle Timestamp data type).
Sounds like a good topic to post as another new Idea thread
This bumped to the top due to a vote ... so I thought I'd reply with some recent experience.
We recently identified this issue with C# and windows.
Windows may only return 10-millisecond precision, depending on your OS and CPU.
CA Gen 8.5 populates all the timestamp digits with some value. However, if you execute CURRENT_TIMESTAMP in a loop, it returns the same value until the next millisecond clicks over.
Fortunately for us, this timestamp was being utilised in a function to generate a slot for a system id based on a reverse timestamp ... so utilising the RandomNumber function solved our issue and simulated the precision we used to get on the mainframe. But I can see that if the design of an application depends on the timestamp being unique each time the CURRENT_TIMESTAMP is invoked, problems would occur.
One suggestion would be that the extra digits could be made somewhat random when the OS/DB cannot provide them. Yes, this can lead to ordering issues ... but overall it is a design issue most likely to arise from a conversion between platforms ...
That's not strictly true with Windows. It can do sub-millisecond accuracy (see Acquiring high-resolution time stamps (Windows) ). A default use of GetTickCount will tie you to 16 milliseconds, but there are a variety of ways to get a higher resolution timer. It hinges on CPU support (RDTSC, HPET) and calling the right APIs, but it is there.
So there's no reason CA couldn't support it if they wanted to, and certainly the underlying DBMS does support it.
Although this idea does not have 10 votes or more, I feel like this is kind of a basic thing that needs to be supported on all of our database platforms, if for no other reason than to be consistent. The state has been changed to Under Review.