Be Driven
Device Drivers in the BeOS

Memory
Memory Foreword
There is a fair amount of overlap between Protection and Memory
Management (as one is performed through the other). Be sure to read
the chapter on Protection before continuing; it will help you understand
what is said here.
Also, if you want to understand what is going on underneath all
this abstraction, be sure to read the memory-X86 document. It is a very
good introduction for those of you who don't know the hardware. But then
of course, if you're a hardware guru, you will know a lot more than I cover.
The difference between lock_memory() and Areas.
As a programmer, you see things from a "Virtual-Address-Space".
How this maps to the Physical-Address-Space involves a lot of mirrors
that you never normally get to see as a programmer.
But the operating system does export a set of functions that give
you a tiny bit of control over this mapping in a really easy manner,
one that also happens to work over more than one processor type. [Aren't
they such nice people.]
To this end, there are two types of functions. One set takes already
allocated memory and moves the data behind the Virtual-Address-Space
into a region of physical memory suitable for DMA operations, along
with the other nasty things needed to prepare for a DMA operation. The other set
lets you create explicit chunks of memory at the Physical-Address-Space
level and map them into your own Virtual-Address-Space in interesting
and peculiar ways.
So the question is: when do you use one over the other?
And the answer is that it all depends on what you are doing.
To lock memory so it doesn't swap, use lock_memory(). This can
make things a lot nicer in device-driver land, since your buffers aren't
getting swapped out to the hard drive, something that could slow things
down nastily in a device driver.
If you are doing DMA operations, you should always call lock_memory()
with the B_DMA_IO | B_READ_DEVICE combination. This patches up a lot
of nasty problems with Intel DMA operations, as well as locking the
memory into place in suitable regions of memory.
Because you can use it to lock memory given to you from user space,
you can temporarily control that memory's behaviour and do something
mildly interesting with it. This eliminates the need to create
an 'area', memcpy() the contents into it, and then send
it via DMA to your device. (Basically, you use it to avoid double
buffering!)
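Here is a minimal sketch of that idea, as a driver read hook; the
dma_from_device() helper is hypothetical, but lock_memory() and
unlock_memory() are the real exports from KernelExport.h:

    #include <KernelExport.h>

    extern status_t dma_from_device(void *buf, size_t len); /* hypothetical */

    static status_t
    my_read(void *cookie, off_t pos, void *buf, size_t *len)
    {
        status_t err;

        /* Pin the caller's pages into RAM and set them up for DMA.
         * B_READ_DEVICE declares the buffer will be FILLED from the
         * device (a device-to-memory transfer). */
        err = lock_memory(buf, *len, B_DMA_IO | B_READ_DEVICE);
        if (err != B_OK)
            return err;

        /* DMA straight into the caller's buffer: no area, no memcpy(). */
        err = dma_from_device(buf, *len);

        /* The flags passed to unlock_memory() must match lock_memory()'s. */
        unlock_memory(buf, *len, B_DMA_IO | B_READ_DEVICE);
        return err;
    }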
When you allocate memory with malloc() inside the kernel, the lock_memory()
functions are automatically called for you.
[Because that's how important it is.]
create_area() and the like give you a different type of
control over a memory region. They also allow you to SHARE memory
between more than one team, something you cannot do with the lock_memory()
function. To this end, you can create shared memory that can be used
for DMA operations, be accessed at any point by an interrupt handler,
and other cool things.
So, in short, get familiar with both sets of functions, and don't
be shy to use them.
Virtual Memory and Protected Memory
Virtual memory transparently extends physical memory: if you touch
a page of memory that isn't loaded in RAM, the OS will go and fetch
it for you.
So when creating an area, be sure to lock it into RAM (B_FULL_LOCK),
so it doesn't cause you hassles during an INTERRUPT, of all things.
When you create your own regions, you can also give them read
and write access.
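As a sketch of the two points above, here is an area locked fully
into RAM with both read and write access; the name and size are made
up for illustration (see create_area() in the chapter CFuncs-OS):

    #include <OS.h>

    static area_id
    make_driver_buffer(void **addr)
    {
        /* B_FULL_LOCK pages the whole area in NOW and keeps it in RAM,
         * so touching it from interrupt code won't fault.
         * B_READ_AREA | B_WRITE_AREA grants read and write access. */
        return create_area("my_driver_buffer",    /* hypothetical name */
                           addr,
                           B_ANY_KERNEL_ADDRESS,  /* kernel space, any spot */
                           B_PAGE_SIZE * 4,       /* size: multiple of a page */
                           B_FULL_LOCK,
                           B_READ_AREA | B_WRITE_AREA);
    }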
Teams and Memory
Go read the section on Protection. No need to repeat this yet again.
Areas: Shared Memory Regions
On the 80386, task isolation is accomplished by putting each task in
a different virtual address space, which changes the virtual-to-physical
address translation mapping for that task.
In short, the virtual-to-physical translation table is a big look-up table.
A process can only access the physical memory mapped inside the table,
thus isolating it from memory elsewhere.
Each task has its own segment tables and page tables, and swapping
these tables in and out is a primary task of multitasking on
a single processor. (This is not talking about multitasking two tasks
over two processors simultaneously. ;-)
Creating shared memory regions exploits the mechanics of the above
situation. By creating an area of physical memory, we then map it
into our virtual address translation tables. This is why two teams
can have the same physical area addressed at DIFFERENT virtual
addresses, or alternatively at the SAME virtual address.
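A sketch of those mechanics, with made-up names: team A creates the
area, team B maps the same physical pages into its own tables with
clone_area():

    #include <OS.h>

    /* Team A: create the area, then hand the area_id to team B
     * somehow (a port, a file, ...). */
    static area_id
    publish_pool(void **base)
    {
        return create_area("shared_pool", base, B_ANY_ADDRESS,
                           B_PAGE_SIZE, B_NO_LOCK,
                           B_READ_AREA | B_WRITE_AREA);
    }

    /* Team B: map the SAME physical pages into its own virtual
     * address space.  B_ANY_ADDRESS lets the two teams end up at
     * different virtual addresses; B_CLONE_ADDRESS would ask for
     * the same one. */
    static void *
    attach_pool(area_id source)
    {
        void *base;
        if (clone_area("shared_pool clone", &base, B_ANY_ADDRESS,
                       B_READ_AREA | B_WRITE_AREA, source) < 0)
            return NULL;
        return base;
    }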
Obtaining the Physical Address of a Memory Region
We have this Virtual-Address-Space and this Physical-Address-Space,
so how do I get the address I can give my hardware to access?
Well, first things first: the memory you want to access must be
locked in memory, so you don't have to worry about the OS pulling
the rug out from under your feet.
If you are creating your own area in memory, make sure that
B_CONTIGUOUS or B_LOMEM (locked into the lower 16MB, for Intel
people) is specified for the lock field, and that the area is mapped
in with B_ANY_KERNEL_ADDRESS. (For more information read
the chapter CFuncs-OS.)
Otherwise, if you were given the memory, use the lock_memory()
function. (For more information read the chapter CFuncs-Kernel
Export.)
So, once the memory is locked into a physical RAM location that
isn't about to move, you then need to get the physical address using
the get_memory_map() function. (For more information
read CFuncs-Kernel Export.)
If you noticed, we passed parameters that create an area whose
physical pages follow one after the other. Without them, you may
have memory scattered across many pages, making it unsuitable
for easy DMA transfers.
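Tying it together, a sketch (names made up) that creates a contiguous,
locked area and asks get_memory_map() where it landed:

    #include <OS.h>
    #include <KernelExport.h>

    static void *
    physical_address_of_buffer(void **virt)
    {
        physical_entry table[1];
        area_id id;

        /* B_CONTIGUOUS gives physically consecutive, locked pages. */
        id = create_area("dma_buffer", virt, B_ANY_KERNEL_ADDRESS,
                         B_PAGE_SIZE * 4, B_CONTIGUOUS,
                         B_READ_AREA | B_WRITE_AREA);
        if (id < 0)
            return NULL;

        /* One physical_entry suffices ONLY because the area is
         * contiguous; a pageable buffer could be scattered over
         * many entries. */
        if (get_memory_map(*virt, B_PAGE_SIZE * 4, table, 1) != B_OK)
            return NULL;
        return table[0].address;   /* this is what the hardware wants */
    }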
For a fuller worked example of how this is done, refer to:
Be Newsletter, Volume II, Issue 25; June 24, 1998
"My Address? In What Space?"
By Igor Eydelnant
Be warned that since this article was written, the parameters passed
to create_area() have changed, so you should read the description
of create_area() in the chapter called CFuncs-Kernel
Export.
Kernel Access to Hardware I/O Registers
[I need more feedback on this. What is the kernel abstraction?]
"The third item on the checklist is timing. Take a device that
has a command register but provides no acknowledgment that the command
has been received. If a command is written to the command register
before the device reads the first command, the first command will
be overwritten.
Since we know, however, that the device processes a command in 5
microseconds, if the driver needs to write two commands, we just put
a 5 microsecond delay, spin (5), between the two commands. In cases
when the register is used by different functions, the driver may work
fine without adding any delay -- though this is bound to break on
a future system.
"
Be Newsletter, Issue 102, December 3, 1997
A Remembrance of Things Past: Processor-Independent Device Drivers
By Arve
The spin() function is still valid in Release 4.0, so don't worry yet.
[Need to update with newer commands; do some research.]
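As a sketch of the fix described in the quote, with a hypothetical
command register and helper (only spin() is the real kernel export;
it busy-waits for the given number of microseconds):

    #include <KernelExport.h>

    #define MY_COMMAND_REG  0x300   /* hypothetical I/O address */

    extern void write_command(int reg, uint8 cmd);   /* hypothetical */

    static void
    send_two_commands(uint8 first, uint8 second)
    {
        write_command(MY_COMMAND_REG, first);
        spin(5);    /* the device needs 5 microseconds per command */
        write_command(MY_COMMAND_REG, second);
    }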
Accessing Hardware Register Locations.
All of the following source is stolen from the publicly available code:
* etherpci.c
* Copyright (c) 1998 Be, Inc. All Rights Reserved
Umm, this seems to be a wild kettle of fish:
1) Intel accesses the registers through pci_module_info->write_io_???
(the write_io_8() family and friends)
2) PPC accesses the memory-mapped registers directly:
#define VBYTE(x) *((volatile unsigned char *)(x))
Why don't they both access it through one method or the other?
Need to find documentation.
__eieio();   /* PPC: "enforce in-order execution of I/O", a barrier
                that keeps the register accesses in order */
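A sketch of how the two styles can live behind one macro, in the
spirit of etherpci.c (the write_reg()/read_reg() names are mine;
pci_module_info and __eieio() are real):

    #include <KernelExport.h>
    #include <PCI.h>

    extern pci_module_info *pci;   /* from get_module(B_PCI_MODULE_NAME, ...) */

    #ifdef __INTEL__
    /* Intel: registers live in I/O space, reached via the PCI bus manager. */
    #define write_reg(base, off, val)  (*pci->write_io_8)((base) + (off), (val))
    #define read_reg(base, off)        (*pci->read_io_8)((base) + (off))
    #else
    /* PPC: registers are memory-mapped and hit directly, with __eieio()
     * keeping the accesses in order. */
    #define VBYTE(x)  (*((volatile unsigned char *)(x)))
    #define write_reg(base, off, val)  do { VBYTE((base) + (off)) = (val); __eieio(); } while (0)
    #define read_reg(base, off)        (VBYTE((base) + (off)))
    #endif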
Direct Memory Access (DMA)
Read the chapter CFuncs-Kernel Export on locking and unlocking
memory before doing any DMA operations. It is very important; it
talks about issues with Intel and DMA problems.
There are three modes of transfer:
- from one port to memory;
- from memory to one port address;
- from memory to memory.
There can be multiple DMA channels, depending on bus mastering, PCI,
ISA, etc. Can somebody fill in the story?
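Until then, here is a sketch of a device-to-memory transfer under the
rules above; set_dma_address() and start_dma_and_wait() are
hypothetical controller helpers, while lock_memory() and
get_memory_map() are the real kernel exports:

    #include <KernelExport.h>

    extern void set_dma_address(void *phys, ulong size);  /* hypothetical */
    extern void start_dma_and_wait(void);                 /* hypothetical */

    static status_t
    transfer_from_device(void *buf, size_t len)
    {
        physical_entry table[8];
        size_t left = len;
        status_t err;
        long i;

        err = lock_memory(buf, len, B_DMA_IO | B_READ_DEVICE);
        if (err != B_OK)
            return err;

        /* A pageable buffer may be scattered over several physical
         * chunks; feed each chunk to the controller in turn. */
        err = get_memory_map(buf, len, table, 8);
        for (i = 0; err == B_OK && i < 8 && left > 0; i++) {
            set_dma_address(table[i].address, table[i].size);
            start_dma_and_wait();
            left -= table[i].size;
        }

        unlock_memory(buf, len, B_DMA_IO | B_READ_DEVICE);
        return err;
    }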
Current Virtual Address Space Mapping in v4.0!
Don't rely on this, but the virtual address space for each team
is organized this way (this is valid for R4, and might change a lot
in the future):
0x00000000 - 0x01000000 : no man's land
0x01000000 - 0x7fffffff : kernel stuff (kernel, drivers, kernel heap,
                          hardware registers, cache, add-ons, ...)
0x80000000 - 0xffffffff : user space (app, heap, various areas,
                          add-ons, libraries, stacks)
It's important to notice that the lower 2 gigs never change. No matter
which context you're in, they are always there. The higher 2 gigs,
however, are team-dependent.
Old Virtual Address Space Mapping pre v3.0!
And just so you don't go PLANNING things around the above, let me show
you what the memory map used to be ;-)
"
HARDWARE MEMORY MAP
The MPC105 defines the physical memory map of the system as follows:
Start        Size         Description
0x00000000   0x40000000   Physical RAM
0x40000000   0x40000000   Other system memory (motherboard glue regs)
0x80000000   0x00800000   ISA I/O
0x81000000   0x3E800000   PCI I/O
0xBFFFFFF0   0x00000010   PCI/ISA interrupt acknowledge
0xC0000000   0x3F000000   PCI memory
0xFF000000   0x01000000   ROM/flash
"
Be Newsletter, Issue 27, June 12, 1996
OS Writer's Cookbook
By Bob Herold
The Communal Be Documentation Site
1999 - bedriven.miffy.org