
Commit

Merge ../linus
Dave Jones committed Apr 18, 2006
2 parents 530515a + 385910f commit f1f76af
Showing 2,041 changed files with 94,832 additions and 80,561 deletions.
6 changes: 3 additions & 3 deletions CREDITS
@@ -3382,7 +3382,7 @@ S: Germany

N: Geert Uytterhoeven
E: geert@linux-m68k.org
W: http://home.tvd.be/cr26864/
W: http://users.telenet.be/geertu/
P: 1024/862678A6 C51D 361C 0BD1 4C90 B275 C553 6EEA 11BA 8626 78A6
D: m68k/Amiga and PPC/CHRP Longtrail coordinator
D: Frame buffer device and XF68_FBDev maintainer
@@ -3392,8 +3392,8 @@ D: Amiga Buddha and Catweasel chipset IDE
D: Atari Falcon chipset IDE
D: Amiga Gayle chipset IDE
D: mipsel NEC DDB Vrc-5074
S: Emiel Vlieberghlaan 2A/21
S: B-3010 Kessel-Lo
S: Haterbeekstraat 55B
S: B-3200 Aarschot
S: Belgium

N: Chris Vance
49 changes: 36 additions & 13 deletions Documentation/DMA-API.txt
@@ -33,7 +33,9 @@ pci_alloc_consistent(struct pci_dev *dev, size_t size,

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)
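
As a rough sketch of how a driver uses this interface (pdev and RING_BYTES
are placeholder names, not from this document):

	dma_addr_t ring_dma;
	void *ring;

	ring = pci_alloc_consistent(pdev, RING_BYTES, &ring_dma);
	if (!ring)
		return -ENOMEM;
	/* ring is the CPU pointer; ring_dma is the address the device
	 * is told to DMA to/from */
	...
	pci_free_consistent(pdev, RING_BYTES, ring, ring_dma);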

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
@@ -304,12 +306,12 @@ dma address with dma_mapping_error(). A non zero return value means the mapping
could not be created and the driver should take appropriate action (eg
reduce current DMA mapping usage or delay and try again later).
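
For a single buffer the check looks roughly like this (dev, buf and len are
placeholders; dma_mapping_error() takes just the dma address, following the
prototypes of this era of the API):

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma)) {
		/* back off: free other mappings, or defer and retry the I/O */
		return -ENOMEM;
	}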

int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
int
pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)
int
dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction direction)
int
pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)

Maps a scatter gather list from the block layer.

@@ -327,12 +329,33 @@ critical that the driver do something, in the case of a block driver
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
void
pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)
With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for (i = 0, sg = sglist; i < count; i++, sg++) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction direction)
void
pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)

Unmaps the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
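
A hedged sketch of the whole map/use/unmap cycle (dev, sglist and nents are
the same placeholder names used in the example above):

	int count;

	count = dma_map_sg(dev, sglist, nents, DMA_FROM_DEVICE);
	if (count == 0)
		return -ENOMEM;
	/* ... feed the <count> merged entries to the hardware ... */

	/* once the DMA has completed: unmap with the original nents,
	 * not with the count that dma_map_sg() returned */
	dma_unmap_sg(dev, sglist, nents, DMA_FROM_DEVICE);
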
26 changes: 19 additions & 7 deletions Documentation/DMA-mapping.txt
@@ -58,11 +58,15 @@ translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may not use kernel image addresses
(ie. items in the kernel's data/text/bss segment, or your driver's)
nor may you use kernel stack addresses for DMA. Both of these items
might be mapped somewhere entirely different than the rest of physical
memory.
This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)
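
A minimal illustration of the rule (len and the buffer names are placeholders):

	/* fine: kmalloc() memory may be DMA-mapped */
	char *buf = kmalloc(len, GFP_KERNEL);

	/* not fine: kernel/module image data and stack variables */
	static char image_buf[64];
	char stack_buf[64];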

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().
@@ -194,7 +198,7 @@ document for how to handle this case.
Finally, if your device can only drive the low 24-bits of
address during PCI bus mastering you might do something like:

if (pci_set_dma_mask(pdev, 0x00ffffff)) {
if (pci_set_dma_mask(pdev, DMA_24BIT_MASK)) {
printk(KERN_WARNING
"mydev: 24-bit DMA addressing not available.\n");
goto ignore_this_device;
@@ -212,7 +216,7 @@ functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to pci_set_dma_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:
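
That pseudo-code sits in the part of the file not shown in this hunk; the
idea is roughly the following (card and its enable flags are illustrative
names):

	#define PLAYBACK_ADDRESS_BITS	DMA_32BIT_MASK
	#define RECORD_ADDRESS_BITS	DMA_24BIT_MASK

	if (!pci_set_dma_mask(pdev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		printk(KERN_WARNING "%s: playback disabled, DMA mask too small.\n",
		       card->name);
	}

	/* probe the most specific (smallest) mask last */
	if (!pci_set_dma_mask(pdev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		printk(KERN_WARNING "%s: record disabled, DMA mask too small.\n",
		       card->name);
	}
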
@@ -284,6 +288,11 @@ There are two types of DMA mappings:

in order to get correct behavior on all platforms.

Also, on some platforms your driver may need to flush CPU write
buffers in much the same way as it needs to flush write buffers
found in PCI bridges (such as by reading a register's value
after writing it).
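
On a memory-mapped device that usually means a read-back of the register
just written (ioaddr, CTRL and DMA_GO are placeholder names):

	writel(DMA_GO, ioaddr + CTRL);
	(void) readl(ioaddr + CTRL);	/* flush posted writes before the
					 * device starts fetching descriptors */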

- Streaming DMA mappings which are usually mapped for one DMA transfer,
unmapped right after it (unless you use pci_dma_sync_* below) and for which
hardware can optimize for sequential accesses.
@@ -303,6 +312,9 @@ There are two types of DMA mappings:

Neither type of DMA mapping has alignment restrictions that come
from PCI, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
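
One simple way to honour that is to give each DMA buffer its own allocation
rather than embedding it next to CPU-only fields (priv, rx_buf and
RX_BUF_SIZE are illustrative names):

	/* a dedicated allocation keeps the DMA buffer from sharing cache
	 * lines with unrelated driver state */
	priv->rx_buf = kmalloc(RX_BUF_SIZE, GFP_KERNEL);
	if (!priv->rx_buf)
		return -ENOMEM;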


Using Consistent DMA mappings.

2 changes: 1 addition & 1 deletion Documentation/DocBook/Makefile
@@ -2,7 +2,7 @@
# This makefile is used to generate the kernel documentation,
# primarily based on in-line comments in various source files.
# See Documentation/kernel-doc-nano-HOWTO.txt for instruction in how
# to ducument the SRC - and how to read it.
# to document the SRC - and how to read it.
# To add a new book the only step required is to add the book to the
# list of DOCBOOKS.

1 change: 0 additions & 1 deletion Documentation/DocBook/kernel-api.tmpl
@@ -322,7 +322,6 @@ X!Earch/i386/kernel/mca.c
<chapter id="sysfs">
<title>The Filesystem for Exporting Kernel Objects</title>
!Efs/sysfs/file.c
!Efs/sysfs/dir.c
!Efs/sysfs/symlink.c
!Efs/sysfs/bin.c
</chapter>
49 changes: 44 additions & 5 deletions Documentation/DocBook/libata.tmpl
@@ -120,14 +120,27 @@ void (*dev_config) (struct ata_port *, struct ata_device *);
<programlisting>
void (*set_piomode) (struct ata_port *, struct ata_device *);
void (*set_dmamode) (struct ata_port *, struct ata_device *);
void (*post_set_mode) (struct ata_port *ap);
void (*post_set_mode) (struct ata_port *);
unsigned int (*mode_filter) (struct ata_port *, struct ata_device *, unsigned int);
</programlisting>

<para>
Hooks called prior to the issue of SET FEATURES - XFER MODE
command. dev->pio_mode is guaranteed to be valid when
->set_piomode() is called, and dev->dma_mode is guaranteed to be
valid when ->set_dmamode() is called. ->post_set_mode() is
command. The optional ->mode_filter() hook is called when libata
has built a mask of the possible modes. This is passed to the
->mode_filter() function which should return a mask of valid modes
after filtering those unsuitable due to hardware limits. It is not
valid to use this interface to add modes.
</para>
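<para>
A hedged sketch of such a filter (the board test and the exact mask bits
are illustrative, not taken from a real driver):
</para>
<programlisting>
static unsigned int my_mode_filter(struct ata_port *ap,
                                   struct ata_device *adev,
                                   unsigned int xfer_mask)
{
	/* drop UDMA modes the board cannot drive; only clear bits,
	   never set new ones */
	if (my_board_is_udma66_limited(ap))	/* hypothetical helper */
		xfer_mask &= ~(0xE0 << ATA_SHIFT_UDMA);	/* UDMA5 and up */
	return xfer_mask;
}
</programlisting>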
<para>
dev->pio_mode and dev->dma_mode are guaranteed to be valid when
->set_piomode() and when ->set_dmamode() is called. The timings for
any other drive sharing the cable will also be valid at this point.
That is the library records the decisions for the modes of each
drive on a channel before it attempts to set any of them.
</para>
<para>
->post_set_mode() is
called unconditionally, after the SET FEATURES - XFER MODE
command completes successfully.
</para>
@@ -230,6 +243,32 @@ void (*dev_select)(struct ata_port *ap, unsigned int device);

</sect2>

<sect2><title>Private tuning method</title>
<programlisting>
void (*set_mode) (struct ata_port *ap);
</programlisting>

<para>
By default libata performs drive and controller tuning in
accordance with the ATA timing rules and also applies blacklists
and cable limits. Some controllers need special handling and have
custom tuning rules, typically raid controllers that use ATA
commands but do not actually do drive timing.
</para>

<warning>
<para>
This hook should not be used to replace the standard controller
tuning logic when a controller has quirks. Replacing the default
tuning logic in that case would bypass handling for drive and
bridge quirks that may be important to data reliability. If a
controller needs to filter the mode selection it should use the
mode_filter hook instead.
</para>
</warning>
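
<para>
As a rough sketch, such a driver might simply record a fixed mode for each
device present and leave the hardware untouched (the mode chosen and the
bookkeeping fields beyond pio_mode are illustrative):
</para>
<programlisting>
static void my_raid_set_mode(struct ata_port *ap)
{
	int i;

	/* the card's firmware does the real timing; just keep libata's
	   per-device records consistent */
	for (i = 0; i < ATA_MAX_DEVICES; i++) {
		struct ata_device *dev = &ap->device[i];

		if (dev->class == ATA_DEV_ATA || dev->class == ATA_DEV_ATAPI) {
			dev->pio_mode = XFER_PIO_4;
			dev->xfer_mode = XFER_PIO_4;
			dev->xfer_shift = ATA_SHIFT_PIO;
		}
	}
}
</programlisting>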

</sect2>

<sect2><title>Reset ATA bus</title>
<programlisting>
void (*phy_reset) (struct ata_port *ap);
@@ -666,7 +705,7 @@ and other resources, etc.

<sect1><title>ata_scsi_error()</title>
<para>
ata_scsi_error() is the current hostt->eh_strategy_handler()
ata_scsi_error() is the current transportt->eh_strategy_handler()
for libata. As discussed above, this will be entered in two
cases - timeout and ATAPI error completion. This function
calls low level libata driver's eng_timeout() callback, the
2 changes: 1 addition & 1 deletion Documentation/acpi-hotkey.txt
@@ -30,7 +30,7 @@ specific hotkey(event))
echo "event_num:event_type:event_argument" >
/proc/acpi/hotkey/action.
The result of the execution of this aml method is
attached to /proc/acpi/hotkey/poll_method, which is dnyamically
attached to /proc/acpi/hotkey/poll_method, which is dynamically
created. Please use command "cat /proc/acpi/hotkey/polling_method"
to retrieve it.

27 changes: 12 additions & 15 deletions Documentation/feature-removal-schedule.txt
@@ -71,14 +71,6 @@ Who: Mauro Carvalho Chehab <mchehab@brturbo.com.br>

---------------------------

What: remove EXPORT_SYMBOL(panic_timeout)
When: April 2006
Files: kernel/panic.c
Why: No modular usage in the kernel.
Who: Adrian Bunk <bunk@stusta.de>

---------------------------

What: remove EXPORT_SYMBOL(insert_resource)
When: April 2006
Files: kernel/resource.c
@@ -127,13 +119,6 @@ Who: Christoph Hellwig <hch@lst.de>

---------------------------

What: EXPORT_SYMBOL(lookup_hash)
When: January 2006
Why: Too low-level interface. Use lookup_one_len or lookup_create instead.
Who: Christoph Hellwig <hch@lst.de>

---------------------------

What: CONFIG_FORCED_INLINING
When: June 2006
Why: Config option is there to see if gcc is good enough. (in january
@@ -241,3 +226,15 @@ Why: The USB subsystem has changed a lot over time, and it has been
Who: Greg Kroah-Hartman <gregkh@suse.de>

---------------------------

What: find_trylock_page
When: January 2007
Why: The interface no longer has any callers left in the kernel. It
is an odd interface (compared with other find_*_page functions), in
that it does not take a refcount to the page, only the page lock.
It should be replaced with find_get_page or find_lock_page if possible.
This feature removal can be reevaluated if users of the interface
cannot cleanly use something else.
Who: Nick Piggin <npiggin@suse.de>
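
A sketch of the suggested conversion (mapping and index are placeholders;
note find_lock_page() sleeps for the page lock and takes a page reference,
unlike find_trylock_page()):

	/* old, scheduled for removal */
	page = find_trylock_page(mapping, index);

	/* replacement */
	page = find_lock_page(mapping, index);
	if (page) {
		/* ... */
		unlock_page(page);
		page_cache_release(page);
	}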

---------------------------
12 changes: 11 additions & 1 deletion Documentation/filesystems/vfs.txt
@@ -694,7 +694,7 @@ struct file_operations
----------------------

This describes how the VFS can manipulate an open file. As of kernel
2.6.13, the following members are defined:
2.6.17, the following members are defined:

struct file_operations {
loff_t (*llseek) (struct file *, loff_t, int);
@@ -723,6 +723,10 @@ struct file_operations {
int (*check_flags)(int);
int (*dir_notify)(struct file *filp, unsigned long arg);
int (*flock) (struct file *, int, struct file_lock *);
ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, size_t, unsigned int);
ssize_t (*splice_read)(struct file *, struct pipe_inode_info *, size_t, unsigned int);
};

Again, all methods are called without any locks being held, unless
@@ -790,6 +794,12 @@ otherwise noted.

flock: called by the flock(2) system call

splice_write: called by the VFS to splice data from a pipe to a file. This
method is used by the splice(2) system call

splice_read: called by the VFS to splice data from file to a pipe. This
method is used by the splice(2) system call
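
For a typical disk filesystem both hooks can usually point straight at the
generic helpers, roughly like this (myfs_file_operations is an illustrative
name):

	const struct file_operations myfs_file_operations = {
		.llseek		= generic_file_llseek,
		.read		= generic_file_read,
		.write		= generic_file_write,
		.mmap		= generic_file_mmap,
		.splice_read	= generic_file_splice_read,
		.splice_write	= generic_file_splice_write,
	};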

Note that the file operations are implemented by the specific
filesystem in which the inode resides. When opening a device node
(character or block special) most filesystems will call special