Mail Archives: djgpp/1994/11/03/18:56:14
On Wed, 2 Nov 1994, Charles Sandmann wrote:
> > A colleague of mine who is also using DJGPP has commented that low-level
> > disk I/O in DJGPP binaries is substantially slower than native
> > Windows or DOS I/O. The comment is that e.g. Zortech C/C++ can produce
> > binaries that read and write large blocks at up to 400Kb/sec, while
> > DJGPP low-level reads and writes hit a ceiling of around 30Kb/sec
> > regardless of the chosen block size. Obviously there is some hardware
> > dependency here, but I guess the ratios are comparable whatever the
> > hardware.
> >
> > I'm just off to look at the sources for read/write now, but does anyone
> > have any comments on this?
>
> I did some rather extensive benchmark testing on this subject during V2.x
> development (both with and without disk caches). I compared TCC, low-level
> DOS, and DJGPP V1.x and V2.x. A bug fix that improved V1.11 throughput
> was implemented in one of the V1.11maint releases.
>
> The short answer is that our transfer buffer is too small. This is not
> changeable in V1.x, but it can be changed with stubedit in V2.x. I was
> able to read over 1Mb/sec into DJGPP with V2.0 code on large contiguous
> files using a 32Kb transfer buffer (on a real disk; faster on a RAM
> disk). That was about the same speed as TCC or low-level DOS at that
> point.
>
> If you really want speed, you can use the _go32_dpmi calls to allocate
> your own 32Kb real-mode (conventional memory) buffer and do the reads
> manually.
>
Could you post some example code showing how to do this? That
may be enough for people to be content until version 2.0 is ready. :-)
Something like this would be nice:
int fastread(int fd, void *buf, int buflen)
{
    ...
}
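For reference, here's my rough, untested guess at what that might look like,
pieced together from the go32/dpmi headers. I'm assuming DJGPP file
descriptors are plain DOS handles and that dosmemget() is the right way to
copy the data up out of conventional memory -- corrections very welcome:

/* Untested sketch -- assumes DJGPP fds are DOS file handles and that
 * dosmemget() is available (it may live in <go32.h> or <sys/movedata.h>
 * depending on the version).  Error handling is minimal.
 */
#include <string.h>
#include <go32.h>
#include <dpmi.h>

#define XFER_SIZE 32768                /* 32Kb real-mode transfer buffer */

int fastread(int fd, void *buf, int buflen)
{
    _go32_dpmi_seginfo seg;
    _go32_dpmi_registers r;
    int total = 0;

    /* Grab a 32Kb buffer in conventional (DOS) memory.  The size field
       is in 16-byte paragraphs. */
    seg.size = (XFER_SIZE + 15) / 16;
    if (_go32_dpmi_allocate_dos_memory(&seg))
        return -1;

    while (buflen > 0)
    {
        int chunk = (buflen > XFER_SIZE) ? XFER_SIZE : buflen;

        /* DOS read: int 21h, AH=3Fh, BX=handle, CX=count, DS:DX=buffer. */
        memset(&r, 0, sizeof(r));      /* zero ss:sp so DPMI supplies a stack */
        r.x.ax = 0x3f00;
        r.x.bx = fd;
        r.x.cx = chunk;
        r.x.ds = seg.rm_segment;
        r.x.dx = 0;
        _go32_dpmi_simulate_int(0x21, &r);

        if (r.x.flags & 1)             /* carry set: DOS error code in AX */
        {
            _go32_dpmi_free_dos_memory(&seg);
            return -1;
        }
        if (r.x.ax == 0)               /* zero bytes read: end of file */
            break;

        /* Copy this chunk up from conventional memory into the caller's
           buffer. */
        dosmemget(seg.rm_segment * 16, r.x.ax, (char *)buf + total);
        total  += r.x.ax;
        buflen -= r.x.ax;

        if (r.x.ax < chunk)            /* short read: end of file */
            break;
    }

    _go32_dpmi_free_dos_memory(&seg);
    return total;
}

The idea, as I understand it, is simply that a 32Kb conventional-memory
buffer means far fewer int 21h round trips than the default transfer buffer,
which is where the V1.x ceiling seems to come from.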
Thanks,
Ed
/****************************************************************************/
/* Ed Phillips flaregun AT udel DOT edu University of Delaware */
/* Jr Systems Programmer (302) 831-6082 IT/Network and Systems Services */
/****************************************************************************/