  • Maximum object file alignment for Win32 is 16, so don't try
    to set it to 32; otherwise the compiler complains:
    
    exec.c:102: warning: alignment of 'code_gen_prologue'
    is greater than maximum object file alignment.  Using 16
    
    Signed-off-by: Stefan Weil <weil@mail.berlios.de>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
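
    A minimal sketch of the kind of conditional alignment attribute this
    implies; the macro and buffer declaration are illustrative and may not
    match exec.c exactly:

    /* Win32 object files cap alignment at 16, so only ask for 32 elsewhere.
     * Names are placeholders, not the exact exec.c code. */
    #if defined(_WIN32)
    #define CODE_GEN_ALIGN __attribute__((aligned(16)))
    #else
    #define CODE_GEN_ALIGN __attribute__((aligned(32)))
    #endif

    static unsigned char code_gen_prologue[1024] CODE_GEN_ALIGN;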


  • When debugging multi-threaded programs, QEMU's gdb stub would report the
    correct number of threads (the qfThreadInfo and qsThreadInfo packets).
    However, the stub was unable to actually switch between threads (the T
    packet), since it would report every thread except the first as being
    dead.  Furthermore, the stub relied upon cpu_index as a reliable means
    of assigning IDs to the threads.  This was a bad idea; if you have this
    sequence of events:
    
    initial thread created
    new thread #1
    new thread #2
    thread #1 exits
    new thread #3
    
    thread #3 will have the same cpu_index as thread #1, which would confuse
    GDB.  (This problem is partly due to the remote protocol not having a
    good way to send thread creation/destruction events.)
    
    We fix this by using the host thread ID for the identifier passed to GDB
    when debugging a multi-threaded userspace program.  The thread ID might
    wrap, but the same sort of problems with wrapping thread IDs would come
    up with debugging programs natively, so this doesn't represent a
    problem.
    
    Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
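
    A hedged sketch of the idea (names are illustrative; QEMU's gdbstub and
    CPUState fields may differ): in user mode, hand GDB the host thread ID,
    so an identifier is never silently reused after a thread exits.

    #include <sys/syscall.h>
    #include <unistd.h>

    /* Simplified stand-in for the per-CPU state. */
    struct cpu_state {
        int host_tid;    /* recorded by the guest thread itself at startup */
        int cpu_index;   /* reused after thread exit -- unsuitable as a GDB ID */
    };

    /* Each new guest thread records its own host TID (Linux-specific). */
    static void record_host_tid(struct cpu_state *cpu)
    {
        cpu->host_tid = (int)syscall(SYS_gettid);
    }

    /* ID reported in qfThreadInfo/qsThreadInfo and checked by the T packet. */
    static int gdb_thread_id(const struct cpu_state *cpu, int user_mode)
    {
        return user_mode ? cpu->host_tid : cpu->cpu_index + 1;
    }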

  • This patch adds the missing hooks to allow live migration in KVM mode.
    It adds proper synchronization before/after saving/restoring the VCPU
    states (note: PPC is untested), hooks into
    cpu_physical_memory_set_dirty_tracking() to enable dirty memory logging
    at KVM level, and synchronizes that dirty log into QEMU's view before
    running ram_live_save().
    
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
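
    A rough ordering sketch of the flow described above; apart from
    cpu_physical_memory_set_dirty_tracking(), the helper names are
    placeholders rather than QEMU's actual symbols.

    int cpu_physical_memory_set_dirty_tracking(int enable); /* existing hook */
    int kvm_enabled(void);                                   /* existing helper */
    void kvm_sync_dirty_log(void);       /* placeholder: pull KVM's dirty log */
    void ram_send_dirty_pages(void);     /* placeholder: the RAM save loop */

    static int ram_save_live_sketch(int stage)
    {
        if (stage == 1) {
            /* Migration start: have KVM begin logging writes to guest RAM. */
            if (cpu_physical_memory_set_dirty_tracking(1) < 0)
                return -1;
        }

        /* Each pass: fold KVM's dirty log into QEMU's bitmap first, so the
         * RAM loop below sees an up-to-date view of dirty pages. */
        if (kvm_enabled())
            kvm_sync_dirty_log();

        ram_send_dirty_pages();

        if (stage == 3) {
            /* Final pass: stop dirty logging once everything is flushed. */
            cpu_physical_memory_set_dirty_tracking(0);
        }
        return 0;
    }
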
  • Extend kvm_physical_sync_dirty_bitmap() so that it can sync across
    multiple slots. Useful for updating the whole dirty log during
    migration. Moreover, properly pass errors down the whole call chain.
    
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
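
    A sketch of the idea: walk every slot overlapping the requested range,
    sync each one, and propagate the first failure instead of swallowing it.
    The slot structure and sync helper below are simplified placeholders.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t start_addr;
        uint64_t memory_size;   /* zero means the slot is unused */
    } kvm_slot;

    #define KVM_MAX_SLOTS 32
    static kvm_slot slots[KVM_MAX_SLOTS];

    int sync_one_slot(kvm_slot *slot);  /* placeholder: fetch + apply dirty log */

    static int kvm_sync_dirty_bitmap_range(uint64_t start, uint64_t end)
    {
        for (size_t i = 0; i < KVM_MAX_SLOTS; i++) {
            kvm_slot *s = &slots[i];
            if (s->memory_size == 0)
                continue;
            /* Sync every slot that intersects [start, end). */
            if (s->start_addr < end && start < s->start_addr + s->memory_size) {
                int ret = sync_one_slot(s);
                if (ret < 0)
                    return ret;   /* pass errors up the call chain */
            }
        }
        return 0;
    }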

  • This patch fixes several typos in comments in exec.c:
    
                longet -> longer
           recommanded -> recommended
            ajustments -> adjustments
       inconsistancies -> inconsistencies
               phsical -> physical
           positionned -> positioned
           succesfully -> successfully
          regon_offset -> region_offset
    
    and also:
    
          start_region -> start_addr
    
    Signed-off-by: Stuart Brady <stuart.brady@gmail.com>

  • Avi Kivity wrote:
    > Suggest wrapping in a function and hiding it deep inside kvm-all.c.
    >
    
    Done in v2:
    
    ---------->
    
    If the KVM MMU is asynchronous (kernel does not support MMU_NOTIFIER),
    we have to avoid COW for the guest memory. Otherwise we risk serious
    breakage when guest pages change their physical locations due to COW
    after fork. Seen when forking smbd during runtime via -smb.
    
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
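
    A minimal sketch of one way to achieve this on Linux, assuming the check
    ends up wrapped inside kvm-all.c as suggested: madvise(MADV_DONTFORK)
    excludes the guest RAM mapping from the child after fork(), so the
    parent's pages never become copy-on-write.  kvm_has_sync_mmu() is used
    here as a placeholder for "kernel supports MMU_NOTIFIER".

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    int kvm_has_sync_mmu(void);   /* placeholder query */

    static int protect_guest_ram_from_cow(void *ram, size_t size)
    {
        if (kvm_has_sync_mmu())
            return 0;   /* synchronous MMU: COW after fork is harmless */

        /* Asynchronous MMU: forking (e.g. smbd for -smb) must not COW guest RAM. */
        return madvise(ram, size, MADV_DONTFORK);
    }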

  • This is a backport of the guest debugging support for the KVM
    accelerator that is now part of the KVM tree. It implements the reworked
    KVM kernel API for guest debugging (KVM_CAP_SET_GUEST_DEBUG) which is
    not yet part of any mainline kernel but will probably land in 2.6.30.
    So far only x86 is supported, but PPC is expected to catch up soon.
    
    Core features are:
     - unlimited soft-breakpoints via code patching
     - hardware-assisted x86 breakpoints and watchpoints
    
    Changes in this version:
     - use generic hook cpu_synchronize_state to transfer registers between
       user space and kvm
     - push kvm_sw_breakpoints into KVMState
    
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6825 c046a42c-6fe2-441c-8c8c-71466251a162
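
    A sketch of the "soft-breakpoints via code patching" idea on x86
    (simplified types; the actual kvm code differs in structure): save the
    original byte at the breakpoint address and overwrite it with the
    one-byte INT3 opcode.

    #include <stdint.h>

    typedef struct CPUStateStub CPUStateStub;   /* stand-in for CPUState */

    /* QEMU's debug accessor; prototype simplified for the sketch. */
    int cpu_memory_rw_debug(CPUStateStub *env, uint64_t addr,
                            uint8_t *buf, int len, int is_write);

    struct sw_breakpoint {
        uint64_t pc;
        uint8_t saved_insn;   /* byte replaced by INT3 */
    };

    static int insert_sw_breakpoint(CPUStateStub *env, struct sw_breakpoint *bp)
    {
        uint8_t int3 = 0xcc;

        if (cpu_memory_rw_debug(env, bp->pc, &bp->saved_insn, 1, 0) ||
            cpu_memory_rw_debug(env, bp->pc, &int3, 1, 1))
            return -1;
        return 0;
    }

    static int remove_sw_breakpoint(CPUStateStub *env, struct sw_breakpoint *bp)
    {
        /* Restore the original byte when the breakpoint is cleared. */
        return cpu_memory_rw_debug(env, bp->pc, &bp->saved_insn, 1, 1) ? -1 : 0;
    }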


  • env->interrupt_request is accessed at the bit level from both the main
    code and the signal handler, making a race condition possible even on a
    CISC CPU. This causes QEMU to freeze under high load when running the
    dyntick clock.
    
    The patch below moves the bit corresponding to CPU_INTERRUPT_EXIT into a
    separate variable, declared as volatile sig_atomic_t, so it should work
    even on a RISC CPU.

    We may want to move the cpu_interrupt(env, CPU_INTERRUPT_EXIT) case into
    its own function and get rid of CPU_INTERRUPT_EXIT. That can be done
    later; I wanted to keep the patch short for easier review.
    
    Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6728 c046a42c-6fe2-441c-8c8c-71466251a162
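
    The general pattern being applied, in a self-contained form (names are
    illustrative): a flag set from a signal handler and polled by the main
    loop must be a volatile sig_atomic_t, because a read-modify-write of a
    shared bit field is not async-signal-safe.

    #include <signal.h>

    static volatile sig_atomic_t exit_request;

    /* Called from the alarm/timer signal handler instead of OR-ing a bit
     * into env->interrupt_request. */
    static void request_cpu_exit(int signum)
    {
        (void)signum;
        exit_request = 1;
    }

    /* Polled at a safe point in the CPU execution loop. */
    static int should_exit_cpu_loop(void)
    {
        if (exit_request) {
            exit_request = 0;
            return 1;
        }
        return 0;
    }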


  • KVM uses cpu_physical_memory_rw() to access the I/O devices. When a
    read or write with a length of 8 bytes is requested, it is split into
    two 4-byte accesses.
    
    This has been broken in revision 5849. After this revision, only the
    first 4 bytes are actually read/written to the device, as the target
    address is changed, so on the next iteration of the loop the next 4
    bytes are read/written elsewhere (in the RAM of the graphics card).
    
    This patch fixes screen corruption (and most probably data corruption)
    with FreeBSD/amd64. Bug #2556746 in KVM bugzilla.
    
    Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6628 c046a42c-6fe2-441c-8c8c-71466251a162
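
    An illustration of the loop invariant the fix restores (simplified
    placeholders, not the exec.c code): the address used for each device
    access must be this iteration's address, and addr, buf and len must all
    advance together by the chunk size.

    #include <stdint.h>
    #include <string.h>

    void io_write_4(uint64_t io_addr, uint32_t val);  /* placeholder device access */

    /* Sketch assumes len is a multiple of 4, as in the 8-byte case above. */
    static void physical_write_sketch(uint64_t addr, const uint8_t *buf, int len)
    {
        while (len > 0) {
            uint32_t val;
            memcpy(&val, buf, 4);
            io_write_4(addr, val);   /* address for *this* 4-byte chunk */
            addr += 4;               /* then advance addr, buf and len together */
            buf  += 4;
            len  -= 4;
        }
    }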

  • The target memory mapping API may fail if the bounce buffer resources
    are exhausted.  Add a notification mechanism to allow clients to retry
    the mapping operation when resources become available again.
    
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6395 c046a42c-6fe2-441c-8c8c-71466251a162
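
    A usage sketch of the retry mechanism described above, assuming a
    register-a-callback style interface (the prototypes are written from
    this description and may not match the tree exactly):

    #include <stdint.h>
    #include <stddef.h>

    void *cpu_physical_memory_map(uint64_t addr, uint64_t *plen, int is_write);
    void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque));

    struct dma_request {
        uint64_t addr;
        uint64_t len;
        int is_write;
    };

    static void start_dma(struct dma_request *req);

    /* Invoked once bounce-buffer resources are available again. */
    static void retry_map(void *opaque)
    {
        start_dma(opaque);
    }

    static void start_dma(struct dma_request *req)
    {
        uint64_t len = req->len;
        void *p = cpu_physical_memory_map(req->addr, &len, req->is_write);

        if (p == NULL) {
            /* Bounce buffers exhausted: ask to be notified, then retry. */
            cpu_register_map_client(req, retry_map);
            return;
        }
        /* ... perform the transfer on [p, p + len), then unmap ... */
    }
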
  • Devices accessing large amounts of memory (as with DMA) will wish to obtain
    a pointer to guest memory rather than access it indirectly via
    cpu_physical_memory_rw().  Add a new API to convert target addresses to
    host pointers.
    
    In case the target address does not correspond to RAM, a bounce buffer is
    allocated.  To prevent the guest from causing the host to allocate unbounded
    amounts of bounce buffer, this memory is limited (currently to one page).
    
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
    
    
    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6394 c046a42c-6fe2-441c-8c8c-71466251a162
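
    A usage sketch of the map/unmap API this describes: obtain a host
    pointer for a guest-physical range, access it directly, then unmap.
    The prototypes follow this description, with QEMU's target_phys_addr_t
    replaced by uint64_t for the sake of the sketch.

    #include <stdint.h>
    #include <string.h>

    void *cpu_physical_memory_map(uint64_t addr, uint64_t *plen, int is_write);
    void cpu_physical_memory_unmap(void *buffer, uint64_t len, int is_write,
                                   uint64_t access_len);

    /* Copy data into guest memory, DMA-style.  Returns bytes transferred. */
    static int dma_write_to_guest(uint64_t guest_addr, const void *data,
                                  uint64_t len)
    {
        uint64_t mapped_len = len;
        void *host = cpu_physical_memory_map(guest_addr, &mapped_len, 1);
        if (host == NULL)
            return -1;                /* bounce buffer exhausted; retry later */

        /* The mapping may be shorter than requested, e.g. a one-page
         * bounce buffer when the range is not RAM. */
        if (mapped_len > len)
            mapped_len = len;
        memcpy(host, data, mapped_len);

        cpu_physical_memory_unmap(host, mapped_len, 1, mapped_len);
        return (int)mapped_len;
    }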