Ok, this forum topic seems to be (mostly) about how MC ver 2.0 is 'slower' than previous versions, right? But has anybody done any *objective* benchmark testing that supports that complaint? I haven't seen any yet.
In my own experience with this newer ver 2.0, I have NOT seen any significant change in speed, so it is confusing to me that some others have apparently seen a degradation. Just how many people have actually experienced a 'slow-down' between versions? I personally have seen some 'old' problems related to data-gaps that are *still* with us, but those problems do not show up consistently on EVERY startup of MC with my standard set of workspaces. So maybe some users are 'unlucky' enough to be seeing some aspect of an 'old' problem? I don't know the answer to that.
What I *have* seen is the old issue of data being requested and the system then hanging, waiting for data that never comes. It would be easy for that symptom to be interpreted as very exaggerated time-delays in loading several charts, right?
I do believe that there has been acknowledgment (by TSSupport) that there is a data-gap issue that usually occurs on the weekends (when there are no real-time servers, and maybe no historical-data servers in some cases?). Could it be that some of these slowed-down-performance posts in this forum topic are really some manifestation of the old 'missing-data' issues?
I would suggest that IF TSSupport provided some 'benchmark' or 'example' workspaces on this forum, along with some objectively measured performance NUMBERS, then everybody involved would have an easier time sorting this out, right? If such tests covered both the older versions and the newer betas, and the results were posted in the forum-sticky topics, we would have a much better assessment of any change in performance from one version to the next. Without that kind of structured comparison between versions, we will probably continue to lose important opportunities for squashing bugs, right?
For whatever it is worth, here are some results of my own testing, done today (Sunday). My objective was to measure how many minutes it takes to load my 'normal' set of workspaces. I started with seven (7) workspaces, each with three or four charts, and each chart with 1 to 5 data-series, some with both N-tick bars and intraday time-bars. Most of these charts had history-backfill settings of 100 to 365 days, with a couple of charts having only 10 days. My TSServer database is filled with data going back about 365 days, so no history data needed to be downloaded (from eSignal, in my case). So how long did it take to load all of this and calculate all of the indicators (no real-time data needed)? 30 MINUTES!
Should I be upset about this amount of time, or should I just accept that as normal for such a large amount of data?
My computer stats, for reference, are: Pentium 4, 3GHz, 2GB RAM, Windows XP Pro, running the MC beta 2.0.777.777.
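By the way, these times were measured by hand (clock on the wall), so treat them as rough. For anyone who wants to repeat this kind of measurement a bit more precisely, here is a minimal stopwatch sketch in Python. The 30-minute figure plugged in below is just my own run simulated as an example; MC itself is not being launched by this script:

```python
import time

def format_elapsed(seconds: float) -> str:
    """Render an elapsed wall-clock time as minutes, matching the
    rough 'minutes to load' numbers quoted above."""
    minutes = seconds / 60.0
    return f"{minutes:.1f} minutes"

# Manual stopwatch: note the start when you answer 'Yes' to the
# reload-workspaces prompt, and the end when the last chart has
# finished calculating its indicators.
start = time.perf_counter()
# ... wait here for all workspaces/charts to finish loading ...
end = start + 30 * 60  # simulated: my 7-workspace run took ~30 min

print(format_elapsed(end - start))  # → "30.0 minutes"
```

Nothing fancy, but it removes the "was that 28 or 32 minutes?" fuzziness from comparing runs.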
To see what happens if I reduce the number of workspaces, I closed three of the seven, leaving just four, and then closed the MC application, saving all remaining workspaces. I waited until all MC processes had stopped (about 1-2 minutes later), then started MC again. When prompted to load all previously opened workspaces, I said 'Yes' and started the 'timer'. This time it took only 12 MINUTES, about 40% of the original time. Not too surprising, right? This was much more reasonable. Just enough time to drink my 3rd cup of coffee.
Then I decided to make another change: I reduced the backfill on all of my charts to 60 days (or 10 days for the one chart with 1-second bars).
Then I repeated the whole process: saving all workspaces, closing MC, waiting for all MC processes to stop, and reloading the four workspaces, now with 60 days or less of intraday bars. They all loaded in only 5 MINUTES.
Not bad. I can certainly live with that!
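As a sanity check, the numbers above are at least roughly consistent with load time tracking the amount of history loaded rather than just the workspace count. This proportionality reading is purely my own back-of-envelope interpretation, but the arithmetic is simple:

```python
# Measured load times from the runs described above (minutes).
t_7ws_full = 30.0  # 7 workspaces, 100-365 days of backfill
t_4ws_full = 12.0  # 4 workspaces, same backfill
t_4ws_cut = 5.0    # 4 workspaces, backfill cut to ~60 days

# Keeping 4 of 7 workspaces (~57% of them) kept only 40% of the
# load time, so time seems to track data volume, not window count:
time_kept = t_4ws_full / t_7ws_full          # = 0.4
# Cutting the backfill then removed roughly this fraction of the
# remaining load time:
backfill_savings = 1.0 - t_4ws_cut / t_4ws_full
print(f"{time_kept:.0%} of the time kept, "
      f"{backfill_savings:.0%} saved by the backfill cut")
# → "40% of the time kept, 58% saved by the backfill cut"
```

In other words, trimming backfill days buys more than closing workspaces, which matches my experience here.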
There was another check that I did while experimenting with the 7 workspaces and LOTS of backfill. I ran that same test twice, but the second time I shut down my ZoneAlarm firewall suite [ver 7.0.337.000] BEFORE starting to load those 7 workspaces. That reduced the time from 30 minutes to 25 minutes! So I suspect the communications between the database and the MC applications are being 'checked' by the firewall, right? This communication link between the database and the other MC modules runs over local TCP (typical of client-server architectures), right?
If these results are considered (i.e., the effect of closing the firewall), it might shed some light on some of the problems being experienced by some users. Namely, maybe some users have a firewall that is not configured correctly, or that for some other reason is slowing down the inter-process communications?
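To illustrate why per-message inspection of local TCP traffic could add up, here is a small sketch that times round trips over a loopback socket. This is NOT MC's actual protocol (I have no inside knowledge of it), just a generic local echo client/server; the point is that even a fraction of a millisecond of extra overhead per message multiplies across the huge number of data requests a big workspace load must generate:

```python
import socket
import threading
import time

HOST, N_MESSAGES = "127.0.0.1", 1000

def echo_server(server: socket.socket) -> None:
    """Accept one client and echo every message back, standing in
    for a simplified local data server."""
    conn, _ = server.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

# Loopback listener on an OS-chosen free port.
server = socket.create_server((HOST, 0))
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Time N request/response round trips from the "client" side.
with socket.create_connection((HOST, port)) as client:
    start = time.perf_counter()
    for _ in range(N_MESSAGES):
        client.sendall(b"request")
        client.recv(1024)
    elapsed = time.perf_counter() - start

per_msg_ms = elapsed / N_MESSAGES * 1000
print(f"~{per_msg_ms:.3f} ms per round trip")
# Even ~0.1 ms of added firewall overhead per message would cost
# 100 extra seconds over one million round trips.
```

Running this with and without the firewall enabled (and with the firewall rules applied to loopback traffic) would be a cheap way to test whether local inspection is part of the slow-down story.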
Hopefully, TSSupport will sort this all out and give us some more *specific* information: a set of pre-defined workspaces along with their own measured time-to-complete for some specific 'tests' [plus info defining the test-bed being used]. Only then will this topic [on loading-time performance] be effectively addressed, such that users and beta testers will be less confused, 'frustrated', 'annoyed', etc.
Merely handling all of this one complaint at a time will not likely be as effective as a more structured, objective approach that is transparent to all involved, in my humble opinion.
Cheers
denizen2