• A pci config write may remap the vga linear frame buffer, confusing the
    memory slot dirty logging logic.
    
    This fixes Windows with -vga std.
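    A minimal sketch of the idea (helper names are hypothetical, not QEMU's actual API): when a PCI config write moves the frame-buffer BAR, dirty logging has to be stopped on the old range and restarted on the new one so the slot bookkeeping stays consistent.
    
    ```c
    /* Illustrative sketch only; names are hypothetical, not the real QEMU API. */
    #include <stdint.h>
    
    typedef uint64_t target_phys_addr_t;
    
    struct vga_fb_state {
        target_phys_addr_t lfb_addr;   /* current guest-physical base of the LFB */
        uint32_t           lfb_size;
    };
    
    /* Hypothetical dirty-logging hooks. */
    void dirty_log_stop(target_phys_addr_t base, uint32_t size);
    void dirty_log_start(target_phys_addr_t base, uint32_t size);
    
    /* Called from the PCI BAR write path when the guest remaps the LFB. */
    static void vga_lfb_remap(struct vga_fb_state *s, target_phys_addr_t new_base)
    {
        if (s->lfb_addr == new_base) {
            return;                             /* nothing moved */
        }
        if (s->lfb_addr) {
            /* Stop logging on the old slot first, otherwise the dirty
             * bitmap bookkeeping keeps tracking a stale range. */
            dirty_log_stop(s->lfb_addr, s->lfb_size);
        }
        s->lfb_addr = new_base;
        dirty_log_start(s->lfb_addr, s->lfb_size);
    }
    ```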
    
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6852 c046a42c-6fe2-441c-8c8c-71466251a162
  • Otherwise, slot tracking gets confused.
    
    This fixes a screen corruption bug with Ubuntu guest installation.
    
    Signed-off-by: Glauber Costa <glommer@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6851 c046a42c-6fe2-441c-8c8c-71466251a162
  • Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6850 c046a42c-6fe2-441c-8c8c-71466251a162
  • When checking that the size of the control virtqueue return field
    is sufficient, use the correct sg list.
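    For context, a hedged sketch of the kind of check involved, using generic virtio-style structures rather than the literal QEMU code: the device-writable return field of a control request lives in the element's in_sg list, so the size check has to look at in_sg rather than out_sg.
    
    ```c
    /* Sketch only; simplified virtio-style structures, not QEMU's definitions. */
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/uio.h>
    
    struct vq_elem {
        unsigned int out_num, in_num;
        struct iovec out_sg[8];   /* driver -> device buffers (the command) */
        struct iovec in_sg[8];    /* device -> driver buffers (the return field) */
    };
    
    typedef uint8_t virtio_net_ctrl_ack;
    
    static int ctrl_status_buffer_ok(const struct vq_elem *elem)
    {
        /* The status byte is written by the device, so it must be
         * validated against the in_sg list, not the out_sg list. */
        return elem->in_num >= 1 &&
               elem->in_sg[elem->in_num - 1].iov_len >= sizeof(virtio_net_ctrl_ack);
    }
    ```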
    
    Signed-off-by: Alex Williamson <alex.williamson@hp.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6845 c046a42c-6fe2-441c-8c8c-71466251a162
  • This patch adds and uses #defines for the remaining hardcoded PCI
    device IDs.  It also moves definitions taken from linux/pci_ids.h
    into a separate header (hw/pci_ids.h), removes the 'RTL' from
    PCI_DEVICE_ID_REALTEK_RTL8029, and renames PCI_DEVICE_ID_FSL_E500
    to PCI_DEVICE_ID_MPC8533E to match Linux's definition.
    
    Changes in v2:
     * Don't use C99-style comments
     * Move definitions from linux/pci_ids.h into a separate header
     * Rename PCI_DEVICE_ID_FSL_E500 to PCI_DEVICE_ID_MPC8533E
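    An illustrative excerpt of what the new hw/pci_ids.h style looks like; only the Realtek values below are the well-known ones, treat anything else as a placeholder for the pattern.
    
    ```c
    /* hw/pci_ids.h -- illustrative excerpt, not the full header. */
    #ifndef HW_PCI_IDS_H
    #define HW_PCI_IDS_H
    
    /* Vendors */
    #define PCI_VENDOR_ID_REALTEK            0x10ec
    
    /* Devices, named without the 'RTL' prefix as described above */
    #define PCI_DEVICE_ID_REALTEK_8029       0x8029
    
    #endif /* HW_PCI_IDS_H */
    ```
    
    Note the traditional /* */ comments, matching the "don't use C99-style comments" item in the v2 changes.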
    
    Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6841 c046a42c-6fe2-441c-8c8c-71466251a162
  • Hi all,
    since vga_draw_graphic is only called by vga_hw_update when the console
    associated with the graphic card is active, we don't need to check if
    the current console is active using is_graphic_console.
    
    I suspect I introduced these checks when the console switching mechanism
    didn't work as it does now.
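    A rough sketch of why the check is redundant, simplified from how the console layer dispatches updates (not a verbatim copy of console.c): vga_hw_update only invokes the update hook of the currently active console, so by the time vga_draw_graphic runs, the graphic console is already known to be active.
    
    ```c
    /* Simplified sketch of the dispatch path; not the literal console.c code. */
    typedef struct DisplayConsole {
        void (*hw_update)(void *opaque);
        void *hw;
    } DisplayConsole;
    
    static DisplayConsole *active_console;
    
    void vga_hw_update(void)
    {
        /* Only the console that currently owns the display gets its update
         * hook called, so vga_draw_graphic never runs for an inactive
         * console and an extra is_graphic_console() check adds nothing. */
        if (active_console && active_console->hw_update) {
            active_console->hw_update(active_console->hw);
        }
    }
    ```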
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6840 c046a42c-6fe2-441c-8c8c-71466251a162
  • Hi all,
    this patch adds a DisplayAllocator interface that allows display
    frontends (sdl in particular) to provide a preallocated display buffer
    for the graphical backend to use.
    
    Whenever a graphical backend cannot use
    qemu_create_displaysurface_from because its own internal pixel format
    cannot be exported directly (text mode or graphical mode with color
    depth 8 or 24), it creates another display buffer in memory using
    qemu_create_displaysurface and does the conversion.
    This new buffer needs to be blitted into the sdl surface buffer every time
    we need to update portions of the screen.
    We can avoid this using the DisplayAllocator interface: sdl provides its
    own implementation of qemu_create_displaysurface, giving back the sdl
    surface buffer directly (as we used to do before the DisplayState
    changes).
    Since the buffer returned by sdl could be in bgr format, we need to put
    back the handlers for that case.
    
    This approach is good if the following two conditions are true:
    
    1) the sdl surface is a software surface that resides in main memory;
    
    2) the host display color depth is either 16 or 32 bpp.
    
    If the first condition is false, we can see bad performance when using sdl
    and vnc together.
    If the second condition is false, performance is certainly not going to
    improve, but it shouldn't get worse either.
    
    The first condition is always true, at least on linux/X11 systems, and I
    believe it is also true on other platforms.
    The second condition is true in the vast majority of the cases.
    
    This patch should also have the good side effect of solving the sdl
    2D slowness malc was reporting on MacOS, because SDL_BlitSurface is not
    going to be called anymore when the guest is in text mode or 24bpp.
    However, the root problem is still present, so I suspect we may
    still see some slowness on MacOS when the guest is in 32 or 16 bpp.
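    To make the shape of the interface concrete, here is a minimal sketch of what such a DisplayAllocator can look like; the hook names follow the description above, and the real header may differ in detail.
    
    ```c
    /* Sketch of the allocator hooks a frontend such as sdl can register;
     * simplified, not the exact console.h definitions. */
    typedef struct DisplaySurface DisplaySurface;
    
    typedef struct DisplayAllocator {
        /* Return a surface backed by the frontend's own buffer (e.g. the
         * sdl software surface) so the backend can render into it directly. */
        DisplaySurface *(*create_displaysurface)(int width, int height);
        DisplaySurface *(*resize_displaysurface)(DisplaySurface *surface,
                                                 int width, int height);
        void (*free_displaysurface)(DisplaySurface *surface);
    } DisplayAllocator;
    ```
    
    With sdl registering an allocator like this, the backend gets the sdl surface buffer directly, and the extra per-update SDL_BlitSurface copy described above is no longer needed in the 16/32 bpp cases.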
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6839 c046a42c-6fe2-441c-8c8c-71466251a162

  • From: Xiantao Zhang <xiantao.zhang@intel.com>
    Date: Tue, 3 Mar 2009 13:33:13 +0800
    Subject: [PATCH] Split ioapic logic from the current apic.
    
    Add a new ioapic.c to hold ioapic's logic, and also
    make it work for ia64.
    
    Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
    ---
     Makefile.target |    2 +-
     hw/apic.c       |  237 +++----------------------------------------------
     hw/ioapic.c     |  263 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
     hw/pc.h         |    5 +-
     4 files changed, 281 insertions(+), 226 deletions(-)
     create mode 100644 hw/ioapic.c
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6827 c046a42c-6fe2-441c-8c8c-71466251a162
  • When a scsi device is backed by a scsi generic device instead of an
    ordinary host block device, the block API is abused in a couple of annoying
    ways:
    
     - nb_sectors is negative, and specifies a byte count instead of a sector count
     - offset is ignored, since scsi-generic is essentially a packet protocol
    
    This overloading makes hacking the block layer difficult.  Remove it by
    introducing a new explicit API for scsi-generic devices.  The new API
    is still backed by the old implementation, but at least the users are
    insulated.
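    A hedged sketch of what an explicit scsi-generic API can look like, using hypothetical names rather than the functions the patch actually introduces: instead of encoding a byte count in a negative nb_sectors and ignoring the offset, the caller submits a SCSI CDB plus a plain byte-sized data buffer.
    
    ```c
    /* Hypothetical packet-style interface; illustrative only, the real API
     * added by the patch may use different names and signatures. */
    #include <stddef.h>
    #include <stdint.h>
    
    typedef struct BlockDriverState BlockDriverState;
    
    typedef struct SGRequest {
        const uint8_t *cdb;      /* SCSI command descriptor block */
        size_t         cdb_len;
        uint8_t       *buf;      /* data-in or data-out buffer */
        size_t         buf_len;  /* a plain byte count, no sector/offset games */
        int            is_write;
    } SGRequest;
    
    /* Submit one SCSI command to a scsi-generic backed device.  Contrast with
     * bdrv_read()/bdrv_write(), which stay strictly sector-addressed. */
    int bdrv_sg_submit(BlockDriverState *bs, const SGRequest *req);
    ```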
    
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6822 c046a42c-6fe2-441c-8c8c-71466251a162