• Spotted by Blue Swirl.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5582 c046a42c-6fe2-441c-8c8c-71466251a162
    aliguori authored
  • Mostly code motion.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5581 c046a42c-6fe2-441c-8c8c-71466251a162
  • The motivating goal behind this is to allow other tools to use the CharDriver
    code.  This patch is pure code motion except for the Makefile changes and the
    copyright/header in qemu-char.c.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5580 c046a42c-6fe2-441c-8c8c-71466251a162
  • The goal of this series is to move the CharDriverState code out of vl.c and
    into its own file, qemu-char.c.  This patch moves around some declarations so
    the next patch can be pure code motion.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5579 c046a42c-6fe2-441c-8c8c-71466251a162
  • With the recent changes to the main loop, we no longer have unconditional
    polling.  This means we can now sleep in select() for much longer than we
    previously did.  This patch increases our select() sleep time from 10ms to 5s
    which is effectively unlimited since we're going to wake up sooner than that
    in almost all circumstances.
    
    With this patch, I see the number of wake-ups with an idle dynamic ticks guest
    drop from about 80 per second to about 15 per second.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5578 c046a42c-6fe2-441c-8c8c-71466251a162
  • Tidy up win32 main loop bits, allow timeout >= 1s, and force timeout to 0 if
    there is a pending bottom half.
    
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5577 c046a42c-6fe2-441c-8c8c-71466251a162
    aliguori authored
  • Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5576 c046a42c-6fe2-441c-8c8c-71466251a162
  • This patch makes qemu keep track of the character devices in use and
    implements a "info chardev" monitor command to print a list.
    
    qemu_chr_open() sticks the devices into a linked list now.  It got a new
    argument (label), so there is a name for each device.  It also assigns a
    filename to each character device.  By default it just copies the
    filename passed in.  Individual drivers can fill in something else
    though.  qemu_chr_open_pty() sets the filename to the name of the pseudo tty
    allocated.
    
    Output looks like this:
    
      (qemu) info chardev
      monitor: filename=unix:/tmp/run.sh-26827/monitor,server,nowait
      serial0: filename=unix:/tmp/run.sh-26827/console,server
      serial1: filename=pty:/dev/pts/5
      parallel0: filename=vc:640x480
    
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5575 c046a42c-6fe2-441c-8c8c-71466251a162
  • I noticed that qemu_aio_flush was doing nothing at all, and that a flood of
    cmd_writeb commands leading to a no-op invocation of qemu_aio_flush was
    being executed.
    
    In short all 'memset;goto redo' places must be fixed to use the bh and
    not to call the callback in the context of bdrv_aio_read or the
    bdrv_aio_read model falls apart. Reading from qcow2 holes is possible
    with physical readahead (kind of breada in the Linux buffer cache).
    
    This is needed at least for scsi, ide is lucky (or it has been
    band-aided against this API breakage by fixing the symptom and not the
    real bug).
    
    Same bug exists in qcow of course, can be fixed later as it's less
    urgent.
    
    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5574 c046a42c-6fe2-441c-8c8c-71466251a162
  • The current DMA routines are driven by a call in main_loop_wait() after every
    select.
    
    This patch converts the DMA code to be driven by a constantly rescheduled
    bottom half.  The advantage of using a scheduled bottom half is that we can
    stop scheduling the bottom half when no DMA channels are runnable.  This
    means we can potentially detect this case and sleep longer in the main loop.
    
    The only two architectures implementing DMA_run() are cris and i386.  For cris,
    I converted it to a simple repeating bottom half.  I've only compile tested
    this as cris does not seem to work on a 64-bit host.  It should be functionally
    identical to the previous implementation so I expect it to work.
    
    For x86, I've made sure to only fire the DMA bottom half if there is a DMA
    channel that is runnable.  The effect of this is that unless you're using sb16
    or a floppy disk, the DMA bottom half never fires.
    
    You should probably test this, malc.  My own benchmarks actually show a slight
    improvement, but it's possible the change in timing could affect your demos.
    
    Since v1, I've changed the code to use a BH instead of a timer.  cris at least
    seems to depend on faster than 10ms polling.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5573 c046a42c-6fe2-441c-8c8c-71466251a162
  • Bottom halves are supposed to not complete until the next iteration of the main
    loop.  This is very important to ensure that guests cannot cause stack
    overflows in the block driver code.  Right now, if you attempt to schedule a
    bottom half within a bottom half callback, you will enter an infinite loop.
    
    This patch uses the same logic that we use for the IOHandler loop to make the
    bottom half processing robust in list manipulation while in a callback.
    
    This patch also introduces idle scheduling for bottom halves.  qemu_bh_poll()
    returns an indication of whether any bottom halves were successfully executed.
    qemu_aio_wait() uses this to immediately return if a bottom half was executed
    instead of waiting for a completion notification.
    
    qemu_bh_schedule_idle() works around this by not reporting that the callback
    has run in the qemu_bh_poll loop.  qemu_aio_wait() probably needs some
    refactoring, but that would require a larger code audit.  Idle scheduling
    seems like a good compromise.
    
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5572 c046a42c-6fe2-441c-8c8c-71466251a162