hi, i’m eliminating Bin in favor of Bin2
now it is my understanding that Bin2 should be faster/more efficient for data transfer and storage in the client dataset, since it handles the data in binary form…
now having eliminated Bin the exe felt slower so i did a simple comparison between the 2 versions
pulling 2000 records from a large dataset takes the following amount of time
bin : 7,8 secs
bin2 : 11,8 secs
so bin2 is 4 seconds slower than bin1
i would have expected the opposite
is there something i’m not setting up right or is there a setting that i miss or is bin2 just slower?
Bin2 was designed to be faster.
Could you create a simple testcase where this difference is noticeable? It might be a missed optimization, a data type that needs optimizing, or some misconfiguration (which would mean that some default settings are not optimal and have to be adjusted). Once Eugene is back I’ll ask him to profile the app to find out exactly what causes the slowdown.
as usual a test case is not that easy… the data alone is problematic…
now the dataset is a DA3 dataset, not DA4, can that influence the behaviour?
the fact that it is so much slower is baffling
if i could get some clues on where to look for the bottleneck i could do some digging myself…
everything except the datastreamer is the same in both builds…
so the logical explanation is that bin2 is the bottleneck
Could you at least please provide the table schema (its DDL) and a bit of sample code used to fetch the data?
Send it to support@ so it will be kept private.
ok i can do that
by DDL you mean like SDAC?
also i saw $IFDEF BIN2DEBUG_time in the readdataset code so i’ll activate that one to see what gives
Data Definition Language. Here it usually means the CREATE TABLE SQL statement that defines a database table.
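For illustration, a hedged sketch of the kind of DDL being asked for (table and column names are invented, not from this thread — the real statement should come from the actual database):

```sql
-- Hypothetical example only; replace with the real table definition.
CREATE TABLE Customers (
  Id        INTEGER      NOT NULL PRIMARY KEY,
  Name      VARCHAR(100) NOT NULL,
  CreatedAt TIMESTAMP
);
```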
That would greatly help, thanks. Could you gather time data from the server-side as well? This define is used in the WriteDataset method too.
omg totally read DDL the wrong way
the OutputDebugString calls use PAnsiChar instead of PWideChar (D10)
 TDABIN2DataStreamer.InternalDoWriteDataset:00:00:01 | 1,73958324012347E-5
 TDABIN2DataStreamer.DoReadDataset:00:00:00 | 0 || 00:00:08 | 9,55092618823983E-5
but the InternalDoReadDataset_StdMode is used instead of the superfast one
boils down to
if not Supports(Destination,IDAMemDatasetBatchAdding) or Assigned(OnReadFieldValue) then
my tableobject inherits from TDACDSDataTable
i guess it needs to inherit from TDAMemDataTable…
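A sketch of how that branch selects the read path, reconstructed only from the condition quoted above (the method name for the fast path is an assumption; the real DA source may differ):

```pascal
// The batch ("superfast") path is only taken when the destination table
// supports IDAMemDatasetBatchAdding (TDAMemDataTable does,
// TDACDSDataTable does not) AND no OnReadFieldValue handler is assigned.
if not Supports(Destination, IDAMemDatasetBatchAdding) or
   Assigned(OnReadFieldValue) then
  InternalDoReadDataset_StdMode(Destination)    // slow, per-field Variant path
else
  InternalDoReadDataset_BatchMode(Destination); // assumed name for the fast path
```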
Could you try to temporarily add a TDAMemDataTable-based table and try to fetch the same data into it?
that’s what i’m trying right now, but it seems that simply changing to TDAMem does not do the trick… debugging
that said, i would expect Bin1-level performance from Bin2 when the high-performance path isn’t available…
apparently onreadfieldvalue is assigned and therefore it is using the std readdataset
what is assigning this onreadfieldvalue event?
nvmd, found the culprit
ok so here goes,
did some extensive testing and it seems a lot of time gets consumed in one method
the dataset consists of 90 fields and 5000 records
that method takes 1.7 seconds to complete
the good news is that reading the dataset into a TDAMemDataTable only takes 0.583 secs, compared to a TDACDSDataTable’s 9 secs (standard method vs the superfast one)
since i’m optimising it seems that 1.7 secs for 5000 rows is still a long time
i’m using SDAC so the source is a TDAESDACQuery…
any thoughts where i can look into to see if we can do the serialization faster?
i see DEFINE ENABLE_DIRECTMODE in the AnyDac driver unit
when enabled it uses NativeFields
is that a faster way to get to the data?
using the SDAC driver everything has to go through Variant operations, which take their time i guess…
not sure what the drawback/effect is of NativeFields…
can you have a look at the other questions?
Bin2 has better performance with TDAMemDataTable; it has different optimizations.
I don’t recommend using direct mode in AnyDAC/FireDAC: it wasn’t tested with the latest FireDAC, so it may generate wrong data. We will probably remove it altogether.
in your case, you retrieve 90 × 5000 = 450000 fields, so 1.7 secs can be a good result for such an amount of data.
Note: this data has to be serialized to a stream with Bin2, then serialized with BinMessage, and finally deserialized with BinMessage and the Bin2 streamer…
doing bin2 with binmessage, both server and client
but looking at the code all values are converted from variant, and i guess that’s where the bottleneck is?
the time needed for the data to get into the SDAC dataset is negligible, so it seems
i’m just wondering if there are optimizations that can be done
that’s why i was looking at the direct mode in anydac…
we get complaints that our software is getting slower; the reason is simple, we need to pull in more data to be able to do the same work
and we needed to get rid of bin1streamer…
the first test was dramatic: Bin2 with our legacy TDACDSDataTable descendants was taking double the time compared to Bin1, due to the standard streaming mode
bin2 deserialisation in combination with TDAMemDataTable is negligible, so that is a big plus
but i need to convert all TDACDSDataTable tables to TDAMemDataTable
in the meantime i can’t roll out bin2 because it is way too slow when not used in combination with TDAMemDataTable
anyway, since we are putting time and effort into it, and since we find 1.7 secs for that amount of fields rather slow (sorry), we were looking for ways to optimise the serialization…
i’m open to experiment…
you can use DAConverter.exe for the conversion:

RemObjects Data Abstract v3->v4/v5 Client Conversion Utility - Version 0.11

Syntax:
  DAConverter [/wait] [/moveevents] [/usebin2] [/usememtable] [/nobackup] /folder:<folder>
  DAConverter [/wait] [/moveevents] [/usebin2] [/usememtable] [/nobackup] <filename1> [<filename2> [...]]

The specified filenames can be either the .pas, .dfm or .xfm files; the converter will always process both .pas+.dfm (or .pas+.xfm).

The optional switches:
  'moveevents'  - moves the DT's events to the corresponding RDA.
  'usebin2'     - replaces TDABinAdapter and TDABinDataStreamer with TDABin2DataStreamer.
  'usememtable' - replaces TDACDSDataTable and TDAADODataTable with TDAMemDataTable.
  'nobackup'    - disables creating .bak files

Warning: event types may need manual correction.
I think you don’t need to have all 5000 records on the client side in a grid at the same time.
You may fetch records on demand… we had the Fetch sample in DA4… if you want, I can attach it here
in most cases we don’t need 5000, that is correct, although we have larger datasets that need to be processed; paging is not required in that case as long as the timings are acceptable
DAConverter will probably not be useful since we don’t do RAD development; we have a base class derived from TDACDSDataTable which i need to convert, but there are a lot of extra methods that need to be checked and a system to hook up child datasets that needs to be altered
and we still have the issue of the multiple statement names which we use for the same dataset and same database connection, which we had to work around
that said, i guess changing DA3 datasets to DA4 will not speed up things for a given dataset?
and isn’t there a way around processing everything as a Variant while serializing?