Mailstation Emulation - Episode Three (Dec 30, 2009)

  1. From: "cyranojones_lalp" Dec 31, 2009
  2. From: "FyberOptic" Dec 31, 2009
  3. From: "FyberOptic" Jan 2, 2010
  4. From: "FyberOptic" Jan 3, 2010
  5. From: "cyranojones_lalp" Jan 5, 2010
  6. From: "FyberOptic" Jan 5, 2010
  7. From: "cyranojones_lalp" Jan 7, 2010
  8. From: "cyranojones_lalp" Jan 7, 2010
  9. From: "cyranojones_lalp" Jan 7, 2010
  10. From: "FyberOptic" Jan 7, 2010
  11. From: "cyranojones_lalp" Jan 9, 2010
  12. From: "FyberOptic" Jan 10, 2010
  13. From: "cyranojones_lalp" Jan 12, 2010


Subject: Re: Mailstation Emulation - Episode Three

From: "FyberOptic" <fyberoptic@...>

Dec 30, 2009


A little more progress.

Turns out, the Mailstation halting after showing the logo was simply
standard routine. The firmware runs through the message loop checking
for things to do, and when there are no more, it HALTs the CPU. The
rest of the hardware stays running, while the CPU waits for an interrupt
to wake it up. When that happens, the interrupt routine is triggered,
new events are possibly added to the message queue based on what the
interrupt was, and then it returns into the message loop to do it all
over again.
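
As a sketch in C (with made-up structure and field names, not the
emulator's actual code), that HALT-until-interrupt behavior boils down
to something like this:

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical sketch of the behavior described above: once the
 * firmware HALTs, the CPU core stops fetching opcodes, but the rest of
 * the machine keeps running until an interrupt wakes it up. */
typedef struct {
    bool halted;        /* set by the HALT opcode                */
    bool int_pending;   /* set by emulated hardware              */
    int  pc;            /* program counter (illustrative)        */
} Cpu;

/* One step of the emulator's outer loop. Returns true if the CPU
 * actually executed something this step. */
bool cpu_step(Cpu *cpu)
{
    if (cpu->int_pending) {
        cpu->halted = false;      /* an interrupt wakes a halted CPU */
        cpu->int_pending = false;
        cpu->pc = 0x38;           /* Z80 IM 1 vectors to 0x0038      */
        return true;
    }
    if (cpu->halted)
        return false;             /* hardware ticks on; CPU idles    */
    cpu->pc++;                    /* placeholder for an opcode fetch */
    return true;
}
```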

Originally, I was resetting the Mailstation emulation after getting
stuck at the logo, and then managing to get to the settings reset menu.
Turns out, when I manually pumped a bunch of keyboard interrupts while
at that logo screen (repeatedly running the message loop again), it kept
doing things. It eventually popped a dialog box up:

Yet again, there's no text, just like in that low battery warning I got
originally. I don't know what's wrong with that aspect. Maybe there's
still a bug in the Z80 emulation?? Anyway, eventually I deduced that
this must be a configuration error dialog (since I'm using dataflash
from a different firmware version). But I had no way to push the
button to continue.

Meanwhile, I came up with a crude way to emulate hardware timing, by
incrementing a counter every time a byte is read or written from address
space (this happens when reading in instructions as well as data). So
from there, I was able to implement a time16 and a keyboard interrupt to
happen automatically, intertwined with port 3's interrupt mask to know
whether they should be triggered (and handle when an interrupt gets
"reset" during the interrupt routine). So now the error dialog
eventually popped up automatically whenever I started up the emulator.
But I still couldn't press enter to bypass it. Resetting the
Mailstation still took me to that "Reset Settings" menu, mind you, but I
couldn't do anything there either.
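
The counter-driven interrupt scheme might look roughly like this in C
(the period constants, bit layout, and names are made up for
illustration; only the idea of masking against port 3 is from above):

```c
#include <stdint.h>
#include <assert.h>

/* Crude timing sketch: every memory access bumps a counter, and when it
 * crosses a period an interrupt bit is raised -- but only if port 3's
 * mask enables it. Periods and bit positions are assumptions. */
#define INT_KEYBOARD 0x02
#define INT_TIME16   0x10

uint32_t access_count;   /* incremented on every read/write      */
uint8_t  port3_mask;     /* interrupt enable mask (port 3 out)   */
uint8_t  int_status;     /* pending-interrupt bits (port 3 in)   */

void tick(uint32_t kbd_period, uint32_t time16_period)
{
    access_count++;
    if (access_count % kbd_period == 0 && (port3_mask & INT_KEYBOARD))
        int_status |= INT_KEYBOARD;
    if (access_count % time16_period == 0 && (port3_mask & INT_TIME16))
        int_status |= INT_TIME16;
}

/* The ISR acknowledges ("resets") an interrupt by clearing its bit. */
void ack(uint8_t bit) { int_status &= (uint8_t)~bit; }
```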

So, I put together a way to emulate the keyboard matrix hardware
(required ORing and ANDing of values to respond to how the Mailstation
checks the whole grid at a time to see if it even needs to process
individual keys), and then quickly tacked in support for the enter key
based on my actual keyboard's input. Due to the slowness of emulation
at the moment, I had to hold the key in just slightly longer than on the
real hardware, but it totally worked. It closed the error dialog, and
moved along:

[image: msemu_setsettings.png]
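
A minimal sketch of that matrix emulation in C (the active-low
row-select/column-read split is an assumption for illustration, not a
verified description of the real port behavior):

```c
#include <stdint.h>
#include <assert.h>

/* Ten row bytes, one bit per column; pressed keys pull lines low.
 * Reading the column port ANDs together every selected row, which is
 * how the firmware can test the whole grid at once before bothering to
 * scan individual rows. */
uint8_t matrix[10] = { 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF };
uint16_t row_select = 0x3FF;   /* 0 bit = row selected (active low) */

void ms_key(int row, int col, int down)
{
    if (down) matrix[row] &= (uint8_t)~(1u << col);
    else      matrix[row] |=  (uint8_t)(1u << col);
}

uint8_t read_columns(void)
{
    uint8_t val = 0xFF;
    for (int r = 0; r < 10; r++)
        if (!(row_select & (1u << r)))  /* row selected: merge it in */
            val &= matrix[r];
    return val;
}
```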

I really need to figure out why the text isn't printing. But yeah, it's
obviously the configuration screen. I just can't type in any settings
yet since the enter key is all that's tied to my actual keyboard. When
you keep pressing it, the cursor jumps to each setting's area on the
screen, and eventually jumps to the top again. I've gotta come up with
a way to translate PC scancodes to the memory array I'm using to
replicate the keyboard matrix, then I should be all set on input.

Well, then I had a thought. If v2.53's firmware didn't like my
dataflash and wanted me to reset everything, then what would happen with
the v3.03 firmware which actually goes with that dataflash image? Turns
out, it must like it just fine, because I'm getting no errors. But I'm
not getting anything else, either. While sitting at the logo screen, it
waits for keyboard interrupts to happen for a little while, and then
eventually changes the interrupt mask to 0x39 (00111001). That means
it's no longer listening for the keyboard (which normally triggers 64
times a second I think). From what we know (or what I know based on
stuff from here), the interrupts on now are "null", "null", "time16",
and "maybe rtc". My time16 interrupt keeps going off approximately once
a second (since I believe that was the proper rate), at least. I tried
manually triggering rtc interrupts, but haven't gotten anything to
happen. So I don't know what's going on at this point without more
disassembling.

Here's the v3.03 firmware logo if anyone's interested:

[image: msemu_logo303.png]

It might be worth noting that if I use garbage for dataflash with v3.03
instead of the image file I have of it, then I get identical behavior to
v2.53 so far: an error dialog after the logo, and then proceeding to the
configuration window upon pressing enter. So I'm left to assume that
even when v2.53's configuration is set (once I get more keyboard
support), then I'll get stuck at the startup somewhere too.

Anyway, I guess that's all I got for now. I just really want to fix
that text rendering problem!





1: Subject: Re: Mailstation Emulation - Episode Three


From: "cyranojones_lalp" <cyranojones_lalp@...>

Dec 31, 2009


> Turns out, the Mailstation halting after showing the logo was simply
> standard routine. The firmware runs through the message loop checking
> for things to do, and when there are no more, it HALTs the CPU. The
> rest of the hardware stays running, while the CPU waits for an
> interrupt to wake it up. When that happens, the interrupt routine is
> triggered, new events are possibly added to the message queue based on
> what the interrupt was, and then it returns into the message loop to
> do it all over again.

I saw your message the other night, and was stuck trying to think
of why the dialog box popped up.
And then I fell asleep before finishing the post I was working on...

Yeah, that's why it halts.

It never occurred to me that it had emptied the event queue, and
was supposed to halt. I have no guess why the text is not printing.

I was going to mention that you will not get past the splash screen
until you emulate the 60 Hz interrupt. In addition to scanning
the keyboard, it also increments a set of ten timers, and these
timers are used whenever they put something on the screen that
needs to change after some delay. Such as the splash being
erased, and moving on to the main menu (or user select sometimes).

They set a timer, and return to the os. (As Mr. Popeil says,
"you just set it... and forget it!!!") They never just spin in
a delay loop. And when the os has another event for that app,
the os calls the app, passing the event as param.

Any app that uses timers also implements a response for timer
events. The splash is a simple app that just displays
the splash image, sets timer, and then when it gets the timer
event, it makes a call that changes the current app to
either the main menu, or if there is more than one user,
the select user app (or when no user accts are set up yet,
the create user app).

> (since I'm using dataflash from a different firmware version)

I think you can wipe the dataflash, and it will init it. IIRC, the
flag that holds the dataflash state is in 2nd to last sector of
dataflash (about 10 bytes at start of sector, and nothing else in
rest of sector (or not much else????). Preserve the last sector,
that is where your serial number is stored. IIRC, the "flash
test" that you can run from test mode walks on all but that
last sector with the "test data". Everything but the serial number
will be re-initialized after the test. If you have any apps in
the loadable-app space, you can skip wiping them, and I think
they will survive the re-init (I could be wrong. The flash test
does wipe that area of dataflash). It is very possible that
zeroing out the first 2 bytes in 2nd to last dataflash sector is
all you need to do to cause it to be re-init'ed (not sure tho).

> I was able to implement a time16 and a keyboard interrupt to happen
> automatically

I don't recall ever finding anything that used "time16". It
just increments a 16 bit counter every time that int is received,
but I never found anything that used that count value.

"Time32" (named simply 'coz it was 32 bits, v/s 16 bits) is used
for a lot of stuff. It gets incremented by 16 by the same
int as keyscan. The keyscan int is roughly 60 Hz, or about
a 16 millisecond period, so "time32" is roughly in milliseconds.

That 60 Hz interrupt does 3 things:
1) the keyscan.
2) increments time32 by 16.
3) increments each of ten timers......
Wait... I'll go out and come in again...

That 60 Hz interrupt does 12 things:
1) the keyscan.
2) increments time32 by 16.
3 thru 12) increments each of ten timers by 1.
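
That list is simple enough to sketch directly in C (names are
illustrative; the keyscan is stubbed out):

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of the 60 Hz interrupt as described: one keyscan, time32 += 16
 * (so it counts roughly in milliseconds), and ten app timers each +1. */
uint32_t time32;
uint8_t  timers[10];
int      keyscans;

void keyscan(void) { keyscans++; }   /* stand-in for the real scan */

void int_60hz(void)
{
    keyscan();                 /* 1) scan the key matrix          */
    time32 += 16;              /* 2) ~16 ms per tick at 60 Hz     */
    for (int i = 0; i < 10; i++)
        timers[i]++;           /* 3 thru 12) bump each timer by 1 */
}
```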

> intertwined with port 3's interrupt mask to know whether they should
> be triggered (and handle when an interrupt gets "reset" during the
> interrupt routine)

It's not clear to me if the P3-out is a "mask", or if it is just
used to reset the corresponding bit of the register that feeds
P3-in (where P3 in bits are set by the various int inputs).

> a way to translate PC scancodes to the memory array I'm using to
> replicate the keyboard matrix, then I should be all set on input.

You could just disable the keyscan, and replace it with code
that reads an unused port, and calls the put_key_in_buffer
routine with that value. The emulation for that new port
could just feed the keycodes for any keys pressed on pc kbd.
I don't remember for sure what is stored in that keybuffer,
though. It might be the row/col data, along with up/down/shift
info. That would make it a bit harder, possibly even worse than
actually scanning an emulated keyboard. Or maybe skip the
keybuffer, and just feed keyevents into the event queue? Pretty sure
those are ascii codes.
(I'm just thinkin' out loud, not sure any of this is a good idea.)

> My time16 interrupt keeps going off approximately once a second
> (since I believe that was the proper rate), at least.

I think the only one that is important at this point is the
60 Hz keyscan-etc. The rtc might be important as far as
waking up the cpu at the set mail-download time, and for the
date and time to be set right when you power up, but as far as
emulating, prolly not too important.

> It might be worth noting that if I use garbage for dataflash with
> v3.03 instead of the image file I have of it, then I get identical
> behavior to v2.53 so far: an error dialog after the logo, and then
> proceeding to the configuration window upon pressing enter.

This is probably what it is supposed to do. The user acct data
is in the dataflash, so if the dataflash is trashed, after it is
re-initialized you need to enter the user account info.

> So I'm left to assume that even when v2.53's configuration is set
> (once I get more keyboard support), then I'll get stuck at the
> startup somewhere too.

When you get the timers working right, I bet it won't get stuck!

> Anyway, I guess that's all I got for now. I just really want to fix
> that text rendering problem!

You need to set some breakpoints, or at least "flagpoints".
I would prolly just compile some in to the code, but you
could also make commandline switches, or a config file.

The idea being to whittle down your log to a comprehensible
size. So, you set some addresses that you are interested to
know if it is getting to. You can have it just log the
addresses on your "watch list", in the order it gets to them.

Maybe a different switch to stop at certain addresses. Then
you can box in the code where the text is supposed to be copied,
and even dump the addresses involved (third switch).
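
A compiled-in "flagpoint" could be as small as this (a sketch with
example addresses and a fixed-size log, not anyone's actual code):

```c
#include <stdint.h>
#include <assert.h>

/* Watch list of addresses; whenever the emulated PC lands on one, it
 * gets recorded in execution order, so the log stays comprehensible. */
static const uint16_t watch[] = { 0x0038, 0x4000 };  /* example addrs */
static uint16_t hits[256];
static int nhits;

void check_pc(uint16_t pc)
{
    for (unsigned i = 0; i < sizeof watch / sizeof watch[0]; i++)
        if (pc == watch[i] && nhits < 256)
            hits[nhits++] = pc;    /* log in the order it gets there */
}
```

Called once per instruction from the emulator loop, it only records the
addresses you asked about, in the order they were reached.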

I bet it is gonna turn out to be some kind of banking error.
There are many places where a codeflash page needs to be banked
in, just to copy a string from that page to a local ram var.

(OK, at the top, where I said "the other night", add another
night to that, coz it's now "tomorrow" and I still have not
hit "send")

CJ


2: Subject: Re: Mailstation Emulation - Episode Three


From: "FyberOptic" <fyberoptic@...>

Dec 31, 2009


"cyranojones_lalp" <cyranojones_lalp@...> wrote:

> Any app that uses timers also implements a response for timer
> events. The splash is a simple app that just displays
> the splash image, sets timer, and then when it gets the timer
> event, it makes a call that changes the current app to
> either the main menu, or if there is more than one user,
> the select user app (or when no user accts are set up yet,
> the create user app).

I hadn't realized that the splash was an app too. That's good to know.

> I think you can wipe the dataflash, and it will init it. IIRC, the
> flag that holds the dataflash state is in 2nd to last sector of
> dataflash (about 10 bytes at start of sector, and nothing else in
> rest of sector (or not much else????). Preserve the last sector,
> that is where your serial number is stored. IIRC, the "flash
> test" that you can run from test mode walks on all but that
> last sector with the "test data". Everything but the serial number
> will be re-initialized after the test. If you have any apps in
> the loadable-app space, you can skip wiping them, and I think
> they will survive the re-init (I could be wrong. The flash test
> does wipe that area of dataflash). It is very possible that
> zeroing out the first 2 bytes in 2nd to last dataflash sector is
> all you need to do to cause it to be re-init'ed (not sure tho).

I suppose the serial number isn't really important, since you still have
to have a username/password to log into the email account, and I doubt
they cared what Mailstation unit you logged into the official
Mailstation email server with. I wonder if the serial is even sent to
the server when fetching/sending mails.

Something like Tivo on the other hand has the serial on an eprom, since
that's tied directly to your account. Even if you replace the hard
drive, it's still going to work with your account afterward. Though
people have managed to clone those in order to transfer their account to
another one when the system board dies.

> "Time32" (named simply 'coz it was 32 bits, v/s 16 bits) is used
> for a lot of stuff. It gets incremented by 16 by the same
> int as keyscan. The keyscan int is roughly 60 Hz, or about
> a 16 millisecond period, so "time32" is roughly in milliseconds.

How did you deduce that the keyboard interrupt was 60hz? I'm curious,
since I've seen you mention that before, but I have some evidence that
might prove otherwise. Some of it I came to realize just yesterday,
even.

Before, when I was doing all that work on the Mailstation and hooking
the ISR for testing things, I placed a counter variable inside the
keyboard loop. All it did was count up. I called it kbdtest. In the
time16 interrupt, I would copy the value of kbdtest into kbdmax, then
reset kbdtest to 0. kbdmax would be displayed on the screen in separate
code outside the ISR. Since time16 apparently hit at exactly 1 second
intervals (because I believe I timed it by hand as such), then kbdmax
would be a semi-accurate way of determining the speed of the keyboard
interrupt.

As it turned out, kbdmax was resulting in a constant value of 64. So
the keyboard interrupt appeared to be happening 64 times a second, 64hz,
etc.

And now recently, when searching for possible info on the RTC of the
Mailstation (or one similar), I came upon something rather interesting.
It appears that many RTC chips have a programmable square wave
generator, in hz, with values like 16, 32, 64, 128, 256, etc. This
made me think that maybe the keyboard interrupt is being generated by a
programmable RTC.

More possible proof of this is when I was tinkering with port 0x2F a
long time ago. This is what I learned back then:

- Setting bits 4,6 makes time16 interrupt 2x slower (kbdmax = 128)
- Setting bits 5,6 makes time16 interrupt 4x slower (kbdmax = 256)
- Setting bits 4,5,6 makes time16 interrupt 8x slower (kbdmax = 512)
- When bit 6 is clear, but 4, 5, or both are set, time16 interrupt
doesn't seem to ever occur.

Well, back then, I naturally assumed that changing 0x2F was affecting
the time16 interval. But now, after learning of these programmable
square waves, maybe 0x2F is changing the speed of the keyboard
interrupt, not the time16 one. It would make sense if so. Let me
clarify:

Now remember, kbdtest was incrementing in the keyboard interrupt, and
kbdmax was saving this value in the time16 interrupt. The original
assumption was that 0x2F was slowing time16, hence more opportunity for
kbdtest to reach a higher value before time16 hit and saved the value.
Well, what if you turn that around, and assume that it's affecting the
keyboard interrupt instead of time16, making it happen FASTER, thereby
causing kbdtest to count faster, which is then recorded to kbdmax at
what is likely still the normal 1 second interval of time16.

If so, that would mean:
- Setting bits 4,6 makes keyboard interrupt happen at 128hz
- Setting bits 5,6 makes keyboard interrupt happen at 256hz
- Setting bits 4,5,6 makes keyboard interrupt happen at 512hz

These values correspond to what many RTC square waves are capable of
emitting (along with the 64hz I've assumed the Mailstation normally runs
the keyboard at). I looked at several RTC chips, and many had this
programmability, but I couldn't ever find one with registers similar to
what the Mailstation uses. Particularly, they store the two BCD
digits for secs/mins/hours/etc in a single byte at a particular I/O
port, whereas the Mailstation seems to store each individual BCD digit
in two separate ports, based on what's been documented so far.
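
Under that reading, the port-0x2F observations condense to a small
lookup, something like the C below. This only encodes the cases
actually observed above (the behavior of bit 6 alone is a guess, as is
the whole interpretation):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothesized keyboard-interrupt rate from port 0x2F, per the
 * observations listed above: bit 6 seems to enable the tick, and
 * bits 4/5 select a faster rate. Not a verified hardware map. */
int kbd_rate_hz(uint8_t port2f)
{
    int b4 = (port2f >> 4) & 1;
    int b5 = (port2f >> 5) & 1;
    int b6 = (port2f >> 6) & 1;

    if (!b6)
        return (b4 || b5) ? 0 : 64;  /* no interrupt seen with 4/5 set
                                        while bit 6 is clear */
    if (b4 && b5) return 512;
    if (b5)       return 256;
    if (b4)       return 128;
    return 64;                       /* assumed default rate */
}
```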

Anyway, I guess the only way to prove any of this is true would be for
me to display the value of time16 on the screen constantly, while
changing 0x2F. If 0x2F is in fact affecting the speed of the keyboard
interrupt, then printing time16 on the screen constantly would still
show its value updating in 1 second increments no matter what. If my
new assumption is wrong, meaning it's affecting time16 instead like I
originally assumed, then the counter on the screen would update slower.
I'll try this sooner or later to see how it goes.

Either way, I just wanted to point out why I think the keyboard
interrupt normally happens at 64hz instead of 60, but I'm still curious
of your reasoning in case I'm still missing something.

> > happen automatically, intertwined with port 3's interrupt mask to
> > know whether they should be triggered (and handle when an interrupt
> > gets "reset" during the interrupt routine).
>
> It's not clear to me if the P3-out is a "mask", or if it is just
> used to reset the corresponding bit of the register that feeds
> P3-in (where P3 in bits are set by the various int inputs).

I think I may have tested this before, but I don't remember. Either
way, all a person needs to do is hook the ISR, and make a value
increment in, say, the keyboard interrupt. Then change the interrupt
mask to disable keyboard interrupts. If the value stops counting, then
you know the mask is in fact disabling that interrupt. Whenever I get
around to writing the code again to check the time16 rate when changing
0x2F, I'll check this too.

> You could just disable the keyscan, and replace it with code
> that reads an unused port, and calls the put_key_in_buffer
> routine with that value. The emulation for that new port
> could just feed the keycodes for any keys pressed on pc kbd.
> I don't remember for sure what is stored in that keybuffer,
> though. It might be the row/col data, along with up/down/shift
> info. That would make it a bit harder, possibly even worse than
> actually scanning an emulated keyboard. Or maybe skip the
> keybuffer, and just feed keyevents into the event queue? Pretty
> sure those are ascii codes.
> (I'm just thinkin' out loud, not sure any of this is a good idea.)

It's funny you even mention that, because honestly that was my first
thought: to just dump keys straight into the buffer. But this would
only work for emulating the Mailstation OS, and I eventually want all of
my custom code to work with it as closely to the real hardware as
possible. So I did end up creating a translation table, which wasn't as
bad as I thought, actually. I used an array of 10 rows/8 columns, which
stores the PC scancode for each associated key of the Mailstation key
matrix. I have to scan through the array's rows and columns every time
a key is pressed/released to match it with a scancode in the array (so
80 iterations, tops). But once it's found, I can then easily take the
row/column values from the loop to update the actual bitwise matrix
array (which is just 10 bytes representing the rows, since each column
is an individual bit), which is what I use to then actually emulate the
output of port 1 (based on the input of port 1, 2.0, and 2.1).
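
The translation step described there might look roughly like this in C
(the scancode value used below is made up; only the 10x8 table shape
and the bitwise matrix are from the description):

```c
#include <stdint.h>
#include <assert.h>

/* A 10x8 table of PC scancodes, one per Mailstation matrix position;
 * on each PC key event we search it (80 iterations, tops) and flip the
 * matching matrix bit. Cleared bit = key held down. */
uint8_t scantable[10][8];   /* scantable[row][col] = PC scancode */
uint8_t kbdmatrix[10] = { 0xFF,0xFF,0xFF,0xFF,0xFF,
                          0xFF,0xFF,0xFF,0xFF,0xFF };

int translate(uint8_t scancode, int down)
{
    for (int r = 0; r < 10; r++)
        for (int c = 0; c < 8; c++)
            if (scantable[r][c] == scancode) {
                if (down) kbdmatrix[r] &= (uint8_t)~(1u << c);
                else      kbdmatrix[r] |=  (uint8_t)(1u << c);
                return 1;            /* found and applied */
            }
    return 0;                        /* key not mapped    */
}
```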

Not every MS key is emulated yet, and some have been put elsewhere
("Home" is the Home key, though "Back" is the End key). I'm going to
emulate "Function" as the Control key eventually too, but for now
control combos are how I send special commands to my emulator, so I'll
have to change that.

> I think the only one that is important at this point is the
> 60 Hz keyscan-etc. The rtc might be important as far as
> waking up the cpu at the set mail-download time, and for the
> date and time to be set right when you power up, but as far as
> emulating, prolly not too important.

As I mentioned earlier, I was looking into a lot of different RTC chips,
and most of them did have an alarm feature. So that might be tied to
that interrupt strictly to wake it up to check for mail, as you
mentioned. Makes a lot of sense.

> > even when v2.53's configuration is set (once I get more keyboard
> > support), then I'll get stuck at the startup somewhere too.
>
> When you get the timers working right, I bet it won't get stuck!

Well that's the thing, I do have timers working.

But, now that the keyboard is emulated, at a cold boot I can enter
configuration info (even though I can't see it as I type it, unless I
type so much that it scrolls off to the right; but I can see password
asterisks fine). I save, and get to the user selection screen:

and then to the main menu:

I can even use most of the items in the menu without issue (aside from
some missing text at times, and the create new mail app crashing). This
is on v2.53 firmware btw.

But, when I soft-reset after configuring it, my original assumption was
correct: v2.53 sticks at the splash screen, just like v3.03 did with the
proper dataflash configuration already there. I even tried changing the
emulator to always fire keyboard/time16 interrupts regardless of the
interrupt mask, and it makes no difference.

I discovered something shortly ago, however. I was originally assuming
that the Mailstation was changing the interrupt mask from 0x22 to 0x39.
But I added in a feature to the emulator to dump ram with a keypress.
So, I dump ram while the interrupt mask is still 0x22, and then again
when the mask changes to 0x39. Turns out, it's not changing the mask.
It's changing EVERYTHING to 0x39. Ram page 1 is totally full of it, and
page 0 is almost entirely, aside from values which I assume are getting
set during the message queue loop and such when the interrupt hits.

And you know what? I bet I just figured out what it is, because there's
a few 0x39s even in my first ram dump. Right before they start, there's
"Jan " (and 0x39 is ASCII '9'). I bet it's reading the RTC and I'm
returning invalid value(s)!

YEP! I just now tried it, returning 0x01 for ports 0x10 through 0x1C,
and now it warm boots just fine! Even the create new mail app works now
(since it was prolly reading the date/time to know what to put in the
email).

All unhandled IO ports are actually just handled like RAM: stored in and
returned with an array, which I zero out at startup. So it was
returning 0 for all RTC values originally, which obviously was breaking
something. I think I'm actually going to tie the Mailstation RTC to my
PC's clock so that it's always correct, once I figure out how to
represent all the values (and converted to BCD).
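
The BCD part is small. Per the notes above, the Mailstation seems to
keep each BCD digit in its own port, so a clock value like 59 splits
into two nibbles (the packed form is included for comparison with the
single-byte RTC chips mentioned earlier):

```c
#include <stdint.h>
#include <assert.h>

/* Split a clock value (0-99) into separate tens/ones BCD digits, one
 * per emulated RTC port. */
void to_bcd_digits(int value, uint8_t *tens, uint8_t *ones)
{
    *tens = (uint8_t)(value / 10);
    *ones = (uint8_t)(value % 10);
}

/* Packed form, for RTCs that keep both digits in one byte. */
uint8_t to_bcd_packed(int value)
{
    return (uint8_t)(((value / 10) << 4) | (value % 10));
}
```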

So now, aside from things like the modem, printer port, etc, it seems
everything is working enough for the Mailstation to not complain, aside
from the missing text in places.

> Maybe a different switch to stop at certain addresses. Then
> you can box in the code where the text is supposed to be copied,
> and even dump the addresses involved (third switch).

Yesterday, I changed the emulator back to return low battery status, so
that I could get that "The battery power is running low, the system will
power off automatically." error. I figured using this for debugging
would be best because it happens during the startup, before lots of
other junk clutters my debug log. Anyhow, I created a ram dump right
after the "low battery" message box appeared. Turns out, it's really
getting the text string it needs to print, because it's in two different
memory locations, which I've traced back to the code writing them there.
All I can figure for the moment is that maybe some math error is
happening when it's calculating the width/height of the text? I'll have
to decipher that function maybe and step through all of it, I dunno.
Such a pain.

I did have a thought as I was typing this, that maybe the Mailstation
was reading the values of the LCD and ORing the text onto what's already
there. But I'm not seeing any LCD read notices, not to mention I also
remember that it uses an LCD buffer in ram, which is where it likely
would do such comparisons anyway.

> You need to set some breakpoints, or at least "flagpoints".
> I would prolly just compile some in to the code, but you
> could also make commandline switches, or a config file.
>
> The idea being to whittle down your log to a comprehensible
> size. So, you set some addresses that you are interested to
> know if it is getting to. You can have it just log the
> addresses on your "watch list", in the order it gets to them.

Breakpoints are a good idea, and I plan to add something like that in.
But having a full log of everything has actually been infinitely helpful
in tracing down problems, particularly with how buggy this Z80 emulation
library was when I first got it. It was returning the opposite of
certain CPU flags, doing push/pop wrong (SP was handled incorrectly), on
top of several normal opcodes having emulation problems, etc. It took a
while to fix all of that, and being able to search for every instance of
a particular opcode executing after I suspected a problem with it was
useful in order to see the results.

To the libz80 author's credit though, he said this was a rewrite of a
previous Windows-only version, and I guess he just never had reason to
thoroughly put it through its paces like the previous one. I wouldn't
still be using it if I didn't think it was well-written, bugs aside. He
uses a regex solution to generate the opcode functions before compiling,
which lets you modify multiple similar opcodes in one swoop. Otherwise
you'd be changing hundreds by hand.

I'm not sure if speed will ever be an issue once I get rid of a lot of
debugging stuff, but I've been pondering ways I might write my own from
scratch if the need arises. I don't think it'll be that hard, actually.
Just time-consuming.


anojones_lalp@...> wrote:<BR>><BR>> Any app that uses timers also =
implements a response for timer<BR>> events. The splash is a simple ap=
p that just displays<BR>> the splash image, sets timer, and then when it=
gets the timer <BR>> event, it makes a call that changes the current ap=
p to<BR>> either the main menu, or if there is more than one user, <BR>&=
gt; the select user app (or when no user accts are set up yet, <BR>> the=
create user app).<BR><BR>I hadn't realized that the splash was an app too.=
That's good to know.<BR><BR><BR>> <BR>> I think you can wipe the da=
taflash, and it will init it. IIRC, the<BR>> flag that holds the datafl=
ash state is in 2nd to last sector of dataflash <BR>> (about 10 bytes at=
start of sector, and nothing else in<BR>> rest of sector (or not much e=
lse????). Preserve the last sector,<BR>> that is where your serial numb=
er is stored. IIRC, the "flash<BR>> test" that you can run from test mo=
de walks on all but that <BR>> last sector with the "test data". Everyt=
hing but the serial number<BR>> will be re-intiialized after the test. =
If you have any apps in<BR>> the loadable-app space, you can skip wiping=
them, and I think<BR>> they will survive the re-init (I could be wrong.=
The flash test<BR>> does wipe that area of dataflash). It is very pos=
sible that<BR>> zeroing out the first 2 bytes in 2nd to last dataflash s=
ector is<BR>> all you need to do to cause it to be re-init'ed (not sure =
tho).<BR>> <BR><BR>I suppose the serial number isn't really important, s=
ince you still have to have a username/password to log into the email accou=
nt, and I doubt they cared what Mailstation unit you logged into the offici=
al Mailstation email server with. I wonder if the serial is even sent to t=
he server when fetching/sending mails.<BR><BR>Something like Tivo on the ot=
her hand has the serial on an eprom, since that's tied directly to your acc=
ount. Even if you replace the hard drive, it's still going to work with yo=
ur account afterward. Though people have managed to clone those in order t=
o transfer their account to another one when the system board dies.<BR><BR>=
> …[time16] (16 bits) is used for a lot of stuff. It gets
> incremented by 16 by the same int as keyscan. The keyscan int is
> roughly 60 Hz, or about…

How did you deduce that the keyboard interrupt was 60 Hz? I'm curious, since I've seen you mention that before, but I have some evidence that might prove otherwise. Some of it I came to realize just yesterday, even.

Before, when I was doing all that work on the Mailstation and hooking the ISR for testing things, I placed a counter variable inside the keyboard loop. All it did was count up. I called it kbdtest. In the time16 interrupt, I would copy the value of kbdtest into kbdmax, then reset kbdtest to 0. kbdmax would be displayed on the screen by separate code outside the ISR. Since time16 apparently hit at exactly 1-second intervals (I believe I timed it by hand as such), kbdmax would be a semi-accurate way of determining the speed of the keyboard interrupt.

As it turned out, kbdmax came out to a constant value of 64. So the keyboard interrupt appeared to be happening 64 times a second: 64 Hz.

And now recently, when searching for possible info on the RTC of the Mailstation (or one similar), I came upon something rather interesting. It appears that many RTC chips have a programmable square wave generator, with rates in Hz like 16, 32, 64, 128, 256, etc. This made me think that maybe the keyboard interrupt is being generated by a programmable RTC.

More possible proof of this is from when I was tinkering with port 0x2F a long time ago. This is what I learned back then:

 - Setting bits 4,6 makes the time16 interrupt 2x slower (kbdmax = 128)
 - Setting bits 5,6 makes the time16 interrupt 4x slower (kbdmax = 256)
 - Setting bits 4,5,6 makes the time16 interrupt 8x slower (kbdmax = 512)
 - When bit 6 is clear, but 4, 5, or both are set, the time16 interrupt doesn't seem to ever occur.

Well, back then I naturally assumed that changing 0x2F was affecting the time16 interval. But now, after learning of these programmable square waves, maybe 0x2F is changing the speed of the keyboard interrupt, not the time16 one. It would make sense if so. Let me clarify:

Remember, kbdtest was incrementing in the keyboard interrupt, and kbdmax was saving this value in the time16 interrupt. The original assumption was that 0x2F was slowing time16, hence more opportunity for kbdtest to reach a higher value before time16 hit and saved the value. Well, what if you turn that around, and assume it's affecting the keyboard interrupt instead of time16, making it happen FASTER, thereby causing kbdtest to count faster, which is then recorded to kbdmax at what is likely still the normal 1-second interval of time16?

If so, that would mean:

 - Setting bits 4,6 makes the keyboard interrupt happen at 128 Hz
 - Setting bits 5,6 makes the keyboard interrupt happen at 256 Hz

These values correspond to what many RTC square waves are capable of emitting (along with the 64 Hz I've assumed the Mailstation normally runs the keyboard at). I looked at several RTC chips, and many had this programmability, but I couldn't ever find one with registers similar to what the Mailstation uses. In particular, they store the two BCD digits for secs/mins/hours/etc in a single byte at a particular I/O port, whereas the Mailstation seems to store each individual BCD digit in two separate ports, based on what's been documented so far.

Anyway, I guess the only way to prove any of this would be for me to display the value of time16 on the screen constantly while changing 0x2F. If 0x2F is in fact affecting the speed of the keyboard interrupt, then printing time16 on the screen constantly would still show its value updating in 1-second increments no matter what. If my new assumption is wrong, meaning it's affecting time16 instead like I originally assumed, then the counter on the screen would update more slowly. I'll try this sooner or later to see how it goes.

Either way, I just wanted to point out why I think the keyboard interrupt normally happens at 64 Hz instead of 60, but I'm still curious about your reasoning in case I'm still missing something.

> > happen automatically, intertwined with port 3's interrupt mask
> > to know whether they should be triggered (and handle when an
> > interrupt gets "reset" during the interrupt routine).
>
> It's not clear to me if the P3-out is a "mask", or if it is just
> used to reset the corresponding bit of the register that feeds
> P3-in (where P3-in bits are set by the various int inputs).

I think I may have tested this before, but I don't remember. Either way, all a person needs to do is hook the ISR, and make a value increment in, say, the keyboard interrupt. Then change the interrupt mask to disable keyboard interrupts. If the value stops counting, then you know the mask is in fact disabling that interrupt. Whenever I get around to writing the code again to check the time16 rate when changing 0x2F, I'll check this too.

> <kluge>
> You could just disable the keyscan, and replace it with code
> that reads an unused port, and calls the put_key_in_buffer
> routine with that value. The emulation for that new port could
> just feed the keycodes for any keys pressed on the PC kbd.
> I don't remember for sure what is stored in that keybuffer,
> though. It might be the row/col data, along with up/down/shift
> info. That would make it a bit harder, possibly even worse than
> actually scanning an emulated keyboard. Or maybe skip the
> keybuffer, and just feed keyevents into the event queue? Pretty
> sure those are ascii codes.
> (I'm just thinkin' out loud, not sure any of this is a good idea.)
> </kluge>

It's funny you even mention that, because honestly that was my first thought: to just dump keys straight into the buffer. But this would only work for emulating the Mailstation OS, and I eventually want all of my custom code to work with it as closely to the real hardware as possible. So I did end up creating a translation table, which wasn't as bad as I thought, actually. I used an array of 10 rows by 8 columns, which stores the PC scancode for each associated key of the Mailstation key matrix. I have to scan through the array's rows and columns every time a key is pressed/released to match it with a scancode in the array (so 80 iterations, tops). But once it's found, I can easily take the row/column values from the loop and update the actual bitwise matrix array (which is just 10 bytes representing the rows, since each column is an individual bit), which is what I use to actually emulate the output of port 1 (based on the input of port 1, 2.0, and 2.1).

Not every MS key is emulated yet, and some have been put elsewhere ("Home" is the Home key, though "Back" is the End key). I'm going to emulate "Function" as the Control key eventually too, but for now control combos are how I send special commands to my emulator, so I'll have to change that.

> I think the only one that is important at this point is the
> 60 Hz keyscan-etc. The rtc might be important as far as
> waking up the cpu at the set mail-download time, and for the
> date and time to be set right when you power up, but as far as
> emulating, prolly not too important.

As I mentioned earlier, I was looking into a lot of different RTC chips, and most of them did have an alarm feature. So that might be tied to that interrupt strictly to wake the unit up to check for mail, as you mentioned. Makes a lot of sense.

> > So I'm left to assume that …ard support), then I'll get stuck
> > at the startup somewhere too.
>
> When you get the timers working right, I bet it won't get stuck!

Well, that's the thing: I do have timers working.

But now that the keyboard is emulated, at a cold boot I can enter configuration info (even though I can't see it as I type it, unless I type so much that it scrolls off to the right; I can see password asterisks fine). I save, and get to the user selection screen:

(URL) fybertech.net/mailstation/img/msemu_userselect.png

and then to the main menu:

(URL) …msemu_mainmenu.png

I can even use most of the items in the menu without issue (aside from some missing text at times, and the create-new-mail app crashing). This is on v2.53 firmware, btw.

But when I soft-reset after configuring it, my original assumption was correct: v2.53 sticks at the splash screen, just like v3.03 did with the proper dataflash configuration already there. I even tried changing the emulator to always fire keyboard/time16 interrupts regardless of the interrupt mask, and it makes no difference.

I discovered something shortly ago, however. I was originally assuming that the Mailstation was changing the interrupt mask from 0x22 to 0x39. But I added a feature to the emulator to dump RAM with a keypress. So I dumped RAM while the interrupt mask was still 0x22, and then again when the mask changed to 0x39. Turns out, it's not changing the mask. It's changing EVERYTHING to 0x39. RAM page 1 is totally full of it, and page 0 almost entirely, aside from values which I assume are getting set during the message queue loop and such when the interrupt hits.

And you know what? I bet I just figured out what it is, because there are a few 0x39s even in my first RAM dump. Right before they start, there's "Jan ". I bet it's reading the RTC and I'm returning invalid value(s)!

YEP! I just now tried it, returning 0x01 for ports 0x10 through 0x1C, and now it warm boots just fine! Even the create-new-mail app works now (since it was prolly reading the date/time to know what to put in the email).

All unhandled IO ports are actually just handled like RAM: stored in and returned from an array, which I zero out at startup. So it was returning 0 for all RTC values originally, which obviously was breaking something. I think I'm actually going to tie the Mailstation RTC to my PC's clock so that it's always correct, once I figure out how to represent all the values (converted to BCD).

So now, aside from things like the modem, printer port, etc., it seems everything is working well enough for the Mailstation to not complain, aside from the missing text in places.
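The BCD conversion mentioned above is mechanical enough to sketch. A minimal version in C, assuming packed BCD for two-digit fields; since the thread notes the Mailstation may instead keep each digit in a separate port, digit-splitting helpers are shown too (all function names are mine, not from the emulator's source):

```c
#include <stdint.h>

/* Convert a binary value (0-99) to packed BCD, e.g. 59 -> 0x59.
   Useful when feeding the host clock into emulated RTC ports. */
static uint8_t to_bcd(uint8_t value)
{
    return (uint8_t)(((value / 10) << 4) | (value % 10));
}

/* The reverse, for when the firmware writes a BCD value back. */
static uint8_t from_bcd(uint8_t bcd)
{
    return (uint8_t)((bcd >> 4) * 10 + (bcd & 0x0F));
}

/* If each digit really lives in its own port, split instead: */
static uint8_t bcd_tens(uint8_t value) { return value / 10; }
static uint8_t bcd_ones(uint8_t value) { return value % 10; }
```

With helpers like these, an RTC port handler could return `bcd_tens(seconds)` and `bcd_ones(seconds)` for the two ports of each field.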
> Maybe a different switch to stop at certain addresses. Then
> you can box in the code where the text is supposed to be copied,
> and even dump the addresses involved (third switch).

Yesterday, I changed the emulator back to returning low battery status, so that I could get that "The battery power is running low, the system will power off automatically." error. I figured using this for debugging would be best, because it happens during startup, before lots of other junk clutters my debug log. Anyhow, I created a RAM dump right after the "low battery" message box appeared. Turns out, it really is getting the text string it needs to print, because it's in two different memory locations, which I've traced back to the code writing them there. All I can figure for the moment is that maybe some math error is happening when it's calculating the width/height of the text? I'll have to decipher that function and step through all of it, I dunno. Such a pain.

I did have a thought as I was typing this: maybe the Mailstation was reading the values of the LCD and ORing the text onto what's already there. But I'm not seeing any LCD read notices, not to mention I also remember that it uses an LCD buffer in RAM, which is where it would likely do such comparisons anyway.

> > Anyway, I guess that's all I got for now. I just really want
> > to fix that text rendering problem!
>
> You need to set some breakpoints, or at least "flagpoints".
> I would prolly just compile some in to the code, but you
> could also make commandline switches, or a config file.
>
> The idea being to whittle down your log to a comprehensible
> size. So, you set some addresses that you are interested to
> know if it is getting to. You can have it just log the…

…that in. But having a full log of everything has actually been infinitely helpful in tracing down problems, particularly with how buggy this Z80 emulation library was when I first got it. It was returning the opposite of certain CPU flags, doing push/pop wrong (SP was handled incorrectly), on top of several normal opcodes having emulation problems, etc. It took a while to fix all of that, and being able to search for every instance of a particular opcode executing, once I suspected a problem with it, was useful in order to see the results.

To the libz80 author's credit, though, he said this was a rewrite of a previous Windows-only version, and I guess he just never had reason to thoroughly put it through its paces like the previous one. I wouldn't still be using it if I didn't think it was well-written, bugs aside. He uses a regex solution to generate the opcode functions before compiling, which lets you modify multiple similar opcodes in one swoop. Otherwise you'd be changing hundreds by hand.

I'm not sure if speed will ever be an issue once I get rid of a lot of debugging stuff, but I've been pondering ways I might write my own from scratch if the need arises. I don't think it'll be that hard, actually. Just time-consuming.
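The "flagpoint" idea quoted above boils down to a small PC-address filter consulted once per emulated instruction; a sketch (the addresses and names here are illustrative, not real firmware locations):

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_FLAGPOINTS 8

static uint16_t flagpoints[MAX_FLAGPOINTS];
static int num_flagpoints = 0;

/* Register an address of interest (compiled in, or parsed from a
   command-line switch or config file, as suggested in the thread). */
static void add_flagpoint(uint16_t addr)
{
    if (num_flagpoints < MAX_FLAGPOINTS)
        flagpoints[num_flagpoints++] = addr;
}

/* Call with the PC after each emulated instruction; logs only when
   a registered address is hit. Returns 1 if it logged. */
static int check_flagpoints(uint16_t pc, FILE *log)
{
    for (int i = 0; i < num_flagpoints; i++) {
        if (flagpoints[i] == pc) {
            fprintf(log, "flagpoint hit: PC=%04X\n", pc);
            return 1;
        }
    }
    return 0;
}
```

This keeps the log limited to the handful of addresses being "boxed in," instead of a full instruction trace.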



3: Subject: Re: Mailstation Emulation - Episode Four

(top)

From: "FyberOptic" <fyberoptic@...>

Jan 2, 2010


Shew, it's late, but I wanted to post a progress report at least.

As you can see, the text is fine now! And ironically, I still have no idea what the problem was. What I did was actually swap out the Z80 emulation with another library. Took some reintegrating to make it work with how this other one was designed to operate, but not too much work. And as soon as I got it to start properly, I immediately realized it was way faster (probably since I compiled it with its assembly optimizations enabled). And when I got interrupts working, and I got to the first warning dialog about creating a new account, I realized it was fixed. It was showing text. And then the configuration window was too. Great!

So that means there are still opcode(s) which are being handled wrong in libz80, despite all the work I did on it to fix it. And not only was it not showing text, btw, but in the Extras menu the arrow keys were behaving all wrong as well. I don't think I ever mentioned that. But oh well. I was so tired of staring at page after page of disassembled code and debug output trying to find the error that I decided trying another library was the best way to test where the problem really was.

On an even brighter side, this new library, z80em, emulates CPU timing. In fact, it does this so well, combined with a software interrupt related to this feature which is triggered after so many CPU cycles (which you can specify), that I was able to turn this into my main Mailstation interrupt generator. And with a bit more timing code in place, I now have it emulating a 12 MHz Z80, with 1-second time16 interrupts and 64 Hz keyboard interrupts. The cursor even blinks on the screen at the same rate as on the real hardware. Awesome!

After that, I tied the RTC into my PC's clock, so every time you start the emulator you get the right time. This means you can't actually set the RTC time via the Mailstation at the moment, though.

Trying to do something with the modem just freezes it up, as expected. I want to figure out how better to emulate that, which I'm sure is in the datasheet if I still have it. Wouldn't it be neat to emulate a PPP connection with that? But aside from the modem, I haven't had any problems at all. I've messed with all the apps, saved messages to my outbox, etc. All good.

Anyway, I've done a ton of work today cleaning up the code. I've also added in the ability to scale the screen 2x, and to even go full-screen. Seeing the Mailstation OS fill my monitor is both odd and neat!

But yeah, I hope to upload a version good enough for you guys to try out tomorrow sometime. There's just still some things I want to add before I do (like figure out why I have to push the power button twice to turn it off). For now it'll prolly stay locked at 12 MHz, and the interrupt speeds won't be changeable or anything, since there's some experimenting I want to do on the real hardware first to see if I can better understand things. And considering how well the emulation is going right now, I can prolly test out some code on my PC now before sending my test apps to the Mailstation. Which is what I wrote this for to begin with!


(URL) …_settings_fixed.png
(URL) …img/msemu_mainmenu_fixed.png
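The cycle-counted interrupt scheme this post describes reduces to simple arithmetic: at 12 MHz, a 64 Hz keyscan interrupt is due every 12,000,000 / 64 = 187,500 cycles, and time16 every 12,000,000. A sketch of that bookkeeping (names are mine, not z80em's actual API):

```c
#include <stdint.h>

#define CPU_HZ      12000000UL  /* emulated 12 MHz Z80 */
#define KEYSCAN_HZ  64          /* keyboard interrupt rate */
#define TIME16_HZ   1           /* time16 ticks once per second */

/* Cycle intervals between interrupts. */
#define KEYSCAN_PERIOD (CPU_HZ / KEYSCAN_HZ)   /* 187500 cycles */
#define TIME16_PERIOD  (CPU_HZ / TIME16_HZ)    /* 12000000 cycles */

static unsigned long keyscan_due = KEYSCAN_PERIOD;
static unsigned long time16_due  = TIME16_PERIOD;

/* Called with the running cycle total after each emulation slice
   (e.g. from a cycle-count callback); returns a bitmask of which
   interrupts are now due: bit 0 = keyscan, bit 1 = time16. */
static int pending_interrupts(unsigned long cycles)
{
    int pending = 0;
    if (cycles >= keyscan_due) { pending |= 1; keyscan_due += KEYSCAN_PERIOD; }
    if (cycles >= time16_due)  { pending |= 2; time16_due  += TIME16_PERIOD; }
    return pending;
}
```

Deriving both timers from the same cycle counter is what keeps the emulated cursor blink in step with the real hardware.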



4: Subject: Mailstation Emulation v0.1 Release

(top)

From: "FyberOptic" <fyberoptic@...>

Jan 3, 2010

I think I've finally prettied up a version of the emulator well enough to release. But until I get a better page for it, here's the directory index:

(URL)

The quick start instructions are: download msemu_v01.zip and codeflash.bin, extract the ZIP, and drop the .BIN into the folder with it. Then run msemu.exe.

Make sure to look at the readme.txt for info on the keys! You can switch between 2X size and even go fullscreen.

In that directory index above, codeflash.bin is the same as ms253.bin, which is v2.53 of the Mailstation firmware. ms303a.bin is v3.03a, which is slightly different. I didn't include either of these in the ZIP because it's probably against copyright for me to even have them on the website.

The emulator looks for a "codeflash.bin" by default, so you can either rename other firmware images to this, or you can specify an alternate filename on the command line (or just drag the .BIN onto the EXE to launch with it). This lets you try out different versions, or even your own replacement.

I've included a "dataflash.bin" with some generic settings, just so that you can go straight to the main menu when you start it. If you like, you can delete that file, and it'll generate a fresh one the next time you start it.

Note that the intro screen's text colors should be yellow, and the default LCD color should be green (check the readme on how to change them). If they're not for you, let me know!

I'd appreciate feedback, particularly on any problems you might find. There's obviously a lot still not emulated, but apparently there's plenty to make the OS itself run. Just keep in mind that it'll probably freeze up if you try to use the modem!
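The default-with-override firmware selection described above amounts to a one-line argv check; a sketch (illustrative only, not msemu's actual source):

```c
/* Pick the codeflash image: "codeflash.bin" by default, or the first
   command-line argument if one is given (drag-and-drop onto the EXE
   arrives the same way, as argv[1]). */
static const char *pick_codeflash(int argc, char **argv)
{
    return (argc > 1) ? argv[1] : "codeflash.bin";
}
```

So `msemu.exe ms303a.bin` would load the alternate firmware, while a bare `msemu.exe` falls back to codeflash.bin.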


5: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "cyranojones_lalp" <cyranojones_lalp@...>

Jan 5, 2010


Hmmmmm... What the heck am I gonna do with an exe...

The most recent Windows I have is 98, and I have not booted that box up in like a year. I promised myself the next time I boot it, I will back it up. Sooooooooo, I have just been avoiding booting it. And I don't even know if your prog will run on it.

Then I thought about this thin client I have here, with Win XPe on it, running my magicjack. But it does not have enough space. I guess I could copy the files to a thumb drive.....

Then I wondered if it would run with Wine, under Ubuntu.

It does! Pretty neat!!!

> Make sure to look at the readme.txt for info on the keys! You can
> switch between 2X size and even go fullscreen.

Seems they are reversed with respect to the keys in the readme.

> The emulator looks for a "codeflash.bin" by default

I made a copy of (what I believe is) the 253yr from the yahoo group, and renamed it codeflash.bin.

Seems to work fine, but I get a different checksum in the emulator than on an actual 253yr unit I have here. Funny thing is, I am pretty sure I verified Don's dump with an actual 253yr several years ago, and it matched. But it prolly was not this exact same unit. Maybe I changed something in the image I am working with, and forgot??????

I get 91ff on my actual unit, and 9254 with my image file. What checksum do you get emulating your 253yr image?
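The thread never states which algorithm the ROM test uses; a plain 16-bit sum of every byte is one common choice consistent with the four-hex-digit values reported (0x91FF vs 0x9254), and it shows how even a single changed byte would shift the result. Purely an assumption:

```c
#include <stdint.h>
#include <stddef.h>

/* One guess at the ROM checksum: sum every byte of the image,
   truncated to 16 bits. A one-byte difference between two images
   changes the sum, which would explain mismatched results. */
static uint16_t rom_checksum(const uint8_t *image, size_t len)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint16_t)(sum + image[i]);
    return sum;
}
```

Comparing such a sum of the Yahoo-group dump against the value an actual unit reports would quickly show whether the image was modified.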

> Note that the intro screen's text colors should be yellow, and the
> default LCD color should be green (check the readme on how to
> change them). If they're not for you, let me know!

Colors are ok, and can change with the ctrl keys. Took me a bit of head scratching to figure out I needed the right-side ctrl key. You need to make black chars on a light-green-tinted background one of the options! ;-)

> There's obviously a lot still not emulated, but apparently there's
> plenty to make the OS itself run.

Calculator works. Typed a new message, saved it in the outbox, and opened it up again. Even goes into test mode, and passes several tests. The modem test failed, but did not lock it up. (Did not try to send email, though.)

Noticed that it remembered it was in test mode (not sure if I quit the emu, or just "power cycled" it).

Could not finish the keyboard test; is there an "@" key, size, or spell check? Or a "get mail" button?

I would prefer that "back" was mapped to "esc"; I closed the emulator by accident more times than I can count. Esc just seems more intuitive for back. Maybe just make "power" quit the emu??? Or only quit when in "off" mode???

As for why it needs 2 presses, maybe it has something to do with the fact that the "power" does not really go away after the first press? There is a flip-flop chip on the ms board that actually kills power.

It would be a lot more fun if I could tweak the emu code. For instance, I notice that whenever I press a key in the calculator app, the emu prints a message on the text console: "dataflash write". It would be fun if it said what address was written.

I also think it would be fun to compile a Linux version.

I don't know if it is the emulator, or Wine, but the combo is sucking up over 50% of a dual 2.5 GHz AMD cpu. OK, top sez the emu itself is taking over 40% of one cpu, Xorg is taking over 35%, and wine about 20%.

Also, I don't know just which program crashed, but it seems like it was when I was trying to switch out of "full screen" mode. Took out the X server, and any program that was running under X. Linux text consoles were still there, along with a great deal of programs just listed with "2009" as the start time in the process list.

First time this box ever crashed in the ~1 year since I put it together. I rebooted the whole thing, but maybe I could have restarted X. Seemed like a good time for a reboot, though! (I don't think I have rebooted more than 5 times since it was built, and it is up 24-7.) Took over half an hour to check the disk, and I don't really want to do a lot of testing as to just what makes it crash. I think I will avoid the full screen mode, and see if that helps.

CJ


6: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "FyberOptic" <fyberoptic@...>

Jan 5, 2010

> Then I wondered if it would run with Wine, under Ubuntu.
>
> It does! Pretty neat!!!

Yeah, I had it working fine in Wine on Debian when I tried it, and figured any Linux folks could just do that to try it out.

> Seems they are reversed with respect to the keys in the readme.

As soon as you said that, I remembered that I forgot to update the readme when I decided to swap those keys around.

> Seems to work fine, but I get a different checksum in the emulator
> than on an actual 253yr unit I have here. …
> What checksum do you get emulating your 253yr image?

I only have one Mailstation, the demo unit, which runs v3.03a (mail servers can even be set in the configuration). For that version, the hardware gives me a checksum of 0x53d4, but the emulator is showing 0x53e5 when I run the same firmware.

I also noticed that my hardware seems to freeze up after getting to that point, whereas the emulator continues on to a battery test.

No idea what's going on with either of these things yet. I even tried removing my bounds-limiting for the codeflash (it forces an address wrap at 1MB like I'm assuming the real hardware does on pages 64 and up), just in case, and it gave the same results.

I'd have to dig through the ROM Test code to see if the codeflash is the only thing it's testing, or if there's something else being added into the result somehow.

> Could not finish the keyboard test; is there an "@" key, size, or
> spell check? Or a "get mail" button?

None of those buttons are assigned yet, since I wasn't sure what to assign them to. I didn't need 'em for the testing I was doing at the time, either. But you should be able to assign these to other things now, which I'll get into in a bit.

I'll probably want to assign "Get Mail" soon though, since I've been working on trying to emulate the modem chip, and at the moment I have to keep going into the outbox to trigger the modem.

> I would prefer that "back" was mapped to "esc"; I closed the
> emulator by accident more times than I can count. Esc just
> seems more intuitive for back. Maybe just make "power" quit
> the emu???

I did consider changing it to Escape a few times, but as I was testing, I found hitting Escape right quick to get back out of it was easier for the time being. I'll change exit to right-control + Q or X or something eventually, I guess.

The reason "power" doesn't exit is because I want to be able to simulate powering on and off.

> As for why it needs 2 presses, maybe it has something to do with
> the fact that the "power" does not really go away after the first
> press? There is a flip-flop chip on the ms board that actually
> kills power.

Na, it's taking the two presses to finally acknowledge it, which then runs the shutdown function in the firmware, which toggles the bit in port 0x28 for that flip-flop. I'm actually emulating this bit, and "powering off" when it's changed.

It seems that the Mailstation waits for the state of the power button to change somehow before acknowledging it again as being pressed. Sometimes, for example, if you hold F12 while the system is booting, then let it go once you're at the menu, then you only have to press it once to power off. I tried various ways to replicate this behavior automatically, but I didn't have much luck yet. I'm thinking it might have something to do with the signal bouncing of the real hardware, since the power button isn't handled for bouncing like the rest of the keyboard keys are in the keyboard routine.

Hitting the button twice is a minor inconvenience though, so I've worked on other stuff for the time being instead.

Something interesting of note is that when you power off and power back on, normally the hardware would retain the RAM contents, to my knowledge. Well, when I was retaining their contents, the MS would check some ports during startup, before anything was even on the screen (or even before the screen was on?), and then shut itself back down again. Every time you'd try to power it back on, this would happen. The only solution for the time being is clearing the RAM contents at any power-off, until I figure out what the Mailstation is doing.
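The port 0x28 power latch described above can be sketched roughly like this. The bit position and polarity are assumptions for illustration; the post only says the firmware toggles "the bit" and the emulator powers off when it changes:

```c
#include <stdint.h>

#define POWER_BIT 0x01  /* assumed bit position within port 0x28 */

static uint8_t port28 = POWER_BIT;  /* assume bit set while running */
static int emulator_running = 1;

/* Handler for firmware writes to port 0x28: when the shutdown
   routine clears the latch bit, the flip-flop on the real board
   would cut power, so the emulator stops running instead. */
static void write_port28(uint8_t value)
{
    if ((port28 & POWER_BIT) && !(value & POWER_BIT))
        emulator_running = 0;
    port28 = value;
}
```

Watching for the transition (rather than the absolute value) matches the description of "powering off when it's changed."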

> For instance, I notice that whenever I press a key in the
> calculator app, the emu prints a message on the text console:
> "dataflash write". It would be fun if it said what address
> was written.

The text you see in the console is just very basic output. The "dataflash write" message was actually indicating that the emulator was writing the dataflash contents out to the file, not that the Mailstation was currently modifying the contents at that exact moment (though pretty close to it). It doesn't write out the file for every individual modification, for performance reasons.

However, there are debug messages for that, which will not only tell you the current PC when the write is occurring, but also the dataflash address and value being written. Same for sector erases. The debug messages won't work in the version you're using, since I removed the ability when cleaning up the code before. But in v0.1a, you can put /console and/or /debug on the command line. The former spits all IO and other activity to the console; the latter spits it out to a "debug.out" file.

Ever since I changed CPU emulation libraries, the debug output has dramatically reduced, since I'm no longer dumping constant disassembly as well. But the tons of IO port requests are still a mess! Eventually I'll let one limit which ports they're interested in.
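The per-port filtering idea in that last sentence could look something like this (illustrative only, not the actual msemu code):

```c
#include <stdint.h>
#include <stdio.h>

/* One flag per Z80 I/O port; only opted-in ports get logged. */
static uint8_t port_log_enabled[256];

/* Enable logging for a user-supplied list of ports, e.g. parsed
   from a hypothetical command-line option. */
static void log_only_ports(const uint8_t *ports, int count)
{
    for (int i = 0; i < count; i++)
        port_log_enabled[ports[i]] = 1;
}

/* Called on every emulated OUT; returns 1 if the write was logged,
   0 if it was filtered out. */
static int log_port_write(FILE *log, uint8_t port, uint8_t value)
{
    if (!port_log_enabled[port])
        return 0;
    fprintf(log, "OUT %02X <- %02X\n", port, value);
    return 1;
}
```

This would cut the "tons of IO port requests" down to just the ports under investigation, such as 0x2F during the timer experiments.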

> I also think it would be fun to compile a Linux version.

As it just so happens, v0.1a not only includes the source, but will compile under Linux.

As for changing the Mailstation keys, as I mentioned earlier there's an array which holds all the mappings, but you'll probably need that "mailstation_keyboard.html" file I got somewhere before to know which key is what. I'm betting you have it, though!

> Also, I don't know just which program crashed, but it seems like
> it was when I was trying to switch out of "full screen" mode.
> Took out the X server, and any program that was running under X.
> Linux text consoles were still there…

I didn't have any crashes under Debian when using it with Wine. Full-screen mode semi-worked, but didn't actually go full screen. It just kind of replaced most of the desktop (but stayed crammed underneath the menu bar). That's about what I expected, even though that sucks.

Ironically, it wasn't until I compiled a native Linux binary that I had full-screen mode crash the application when switching back and forth several times. Even full-screen under a native binary still didn't work right, though.

But to be blunt, this is Linux after all, and it's notorious for being problematic at running things full-screen. I've had a lot of trouble with other applications running that way in the past. So your advice of "only in a window" seems the best route, since there's really nothing I can do about it. Going full-screen is all handled through SDL calls.

That said, console output is faster under Linux than in Windows! I found that a little surprising.

Anyway, you can grab the newer version here:

(URL) …lator/msemu_v01a.zip

If you have any trouble building it, you can check the build.txt, or give me a holler and I'll try to help.


7: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "cyranojones_lalp" <cyranojones_lalp@...>

Jan 7, 2010


Try inverting the sense of power-button input bit.

Instead of:
case 0x09:
return (byte)0xE0 | ((power_button & 1) << );


8: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "cyranojones_lalp" <cyranojones_lalp@...>

Jan 7, 2010


> Try inverting the sense of power-button input bit.
> return (byte)0xE0 | ((power_button & 1) << );

try:
case 0x09:
return (byte)0xE0 | ((~power_button & 1) << );

(I was trying to enter a tab in previous post, and all of a sudden it said "message sent" or something to that effect. I think the tab moved the focus from the message box over to the "send" button.)

I have not tried to compile it myself just yet, been looking over docs for sdl.

CJ


9: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "cyranojones_lalp" <cyranojones_lalp@...>

Jan 7, 2010


I'm sure fyberoptic knows what I meant, but if (by any stretch) there is anyone else reading this, the "4" got clipped. It wrapped to the next line, and when I edited to fit on one line, I musta deleted it.

So, this is what I should have typed:
case 0x09:
return (byte)0xE0 | ((~power_button & 1) << 4);

I think it is too early for my brain.

CJ


10: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "FyberOptic" <fyberoptic@...>

Jan 7, 2010


Doh, I inverted the main keyboard keys, but never thought to do it for the power button. Goes off with one tap now! Nice find!

> I have not tried to compile it myself just yet, been looking over docs for sdl.

If you're using any Debian-based distro (you said you're in Ubuntu, so you are), then you should be able to grab the development packages of SDL and SDL_gfx through APT. I'm not for sure what repository it came from (hopefully one which is enabled by default), but just search for the package "libsdl-gfx1.2-dev". When you install it, it should automatically pull "libsdl1.2-dev" too. It did for me under Debian. Saved me the trouble of fetching/compiling them manually. Only thing you'll still have to compile separately is Z80em, which is a simple "make" job, pretty much.


11: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "cyranojones_lalp" <cyranojones_lalp@...>

Jan 9, 2010


OK, I got all the pieces installed the other day, and got it compiling and running here, too! :-) :-) :-) :-) :-) :-) :-) :-)

I had to change one line in the Makefile to get it to fly with a 64-bit cpu:

I changed
"objcopy -I binary -O elf32-i386 --binary-architecture i386 rawcga.bin rawcga.o"

to
"objcopy -I binary -O elf64-x86-64 --binary-architecture i386 rawcga.bin rawcga.o"

because the linker refused to link the 32-bit font file with the 64-bit emulator object file. It was pretty easy to figure out the "-O elf64-x86-64", but it took a lot of reading to find out that you needed to use "--binary-architecture i386" for either 32 or 64 bit. Go figger.


Well, that was really good to know! I didn't even think to check if it was in the repo. Turns out libsdl1.2 was already installed, possibly 'coz xmame required it. I just checked off the boxes (in synaptic) for the dev files, and the libsdl-gfx1.2-dev, and "applied" it.

I made the change to cflags you suggested for z80em, and did "make all". I got a boatload of "type mismatch" warnings, but it still works.

I mentioned that the windows version was sucking up close to 100% of cpu, spread across 3 processes (msemu, wine, and I think xorg). This native Linux build is still sucking up 100%, but just in the one msemu process!

I don't think it would stop anything else from running, it's prolly just using it 'coz it is available. But it sure is making the cpu hotter than normal!!! It usually reads below 90 degrees F, but with the cpu at 100% it was running over 110 F!!!!! I even think I could smell the difference, but that may just have been my imagination. :-)

So, I looked at the source, to see if I could see anything to optimize. The first thing I tried was moving the call to system time, so it only happens when one of the time ports is read. Didn't make a noticeable difference, though.

Next, I looked at the main loop. Seems that's the source of the infinite appetite for cpu cycles. The cpu just keeps running that loop, as fast as its little pins can carry it. :-) :-)

I made an assumption that the main loop was cycling much faster than necessary, so I added a "sleep(1)" to the loop. Well, turns out sleep's units are "seconds", so I guess you know that did not come out too good. So I tried sleep's little brother, "usleep", which sleeps microseconds. Works like a charm!!!

I tried usleep(1) through usleep(1000), with only barely perceptible lag noticeable with 1000 usec. I don't know if perhaps it is getting woken up before the 1000 usec, by some other event/interrupt. But I am currently settled in on 100 usec, because there is a "diminishing return" effect on the cpu load reduction.

With usleep(100), it idles at about 4 or 5 percent, and spikes higher when mailstation code is actually doing something. If I lean on the right-arrow key while in the main menu, the ms icon highlighting cycles repeatedly across the screen, and cpu usage goes to about 10 to 15 percent.

If I am understanding how the code works, it seems that the z80 emulation is being called every 16 milliseconds. Is that right? (deleted rambling) At this point, I did a "sleep(30000)" on the wetware processor.

Oh, I think I get it, z80_Execute() runs Z80_IPeriod = 187500 T-states each call. That makes more sense now... I was wondering why it still worked with such large sleep times! (Amazing how a little sleep can make things clearer!)

On another front, I did quite a bit of fiddling with the screen color. First, I "inverted" the colors, making the background the bright pixels, and text the darker. Then I made the green (now a green background) a very light green tint, a quite passable imitation of the actual LCD. That took a few minutes.

Then I spent a few more hours tweaking the colors! :-)

I made all 5 of your color modes into various off-white tinted backgrounds with black foreground, and added a sixth choice, with bluish foreground, and same green tinted background (ala earthlink version of 120 & 150).

It's really kind of interesting how fast your eyes normalize any of the tints to seem "plain white".
CJ


12: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "FyberOptic" <fyberoptic@...>

Jan 10, 2010


Ah okay, never even thought of that being a possible problem. I don't have a 64-bit CPU, myself. I figure the simplest solution for future versions, now that I know that I converted the font data properly, is to just encode it into a C header file and let it compile with the source.

The reason I included my own font to begin with is so that it would look the same regardless of platform. And for the record, this is the same font style that I use in my FyOS software on the Mailstation. It's the classic 8x8 font that CGA video cards used to use. I'm partial to it both for nostalgia's sake as well as the fact that it's very divisible into most screen sizes. The Mailstation gets a 40x16 text display out of it, similar to many old computers.

> I made the change to cflags you suggested for z80em, and did "make all". I got a boatload of "type mismatch" warnings, but it still works.

Yeah, I got those too, but it's no problem. The source was likely written under an earlier version of GCC.


I never noticed it hindering my machine as I worked, so I never even thought to check. Yet I've had to use usleep in daemons before, so you'd think I would remember how important some CPU idle time in there can be!

The easiest cross-platform fix is:

#ifdef WIN32
Sleep(1);
#else
usleep(1000);
#endif

Windows doesn't have less than 1 millisecond sleep unless you get into high-definition timers, and that's a bit overkill. From my momentary tinkering I didn't notice any real difference in performance by having a whole millisecond delay.


I just did some quick math for the number. The Mailstation OS always runs at 12mhz, so 12000000 / 64 = 187500. 64 being the frequency of the keyboard interrupt I determined before. When all the specified CPU cycles are used, the Z80_Interrupt() function is called. This function automatically fires the Mailstation keyboard interrupt (if it's enabled) 64 times a second. Also, after 64 counts of this function executing, the Mailstation time16 interrupt gets fired.

Whenever I get around to implementing support for various CPU and (presumably) RTC timer speeds, these values will be dynamic rather than hard-coded like they are now. I'd rather know more about the I/O port functionality for setting these speeds beforehand, but I haven't gotten around to tinkering on the hardware again yet either.


I'd be curious to see your color schemes, if you want to take screencaps or whatever. I've never even seen the screen of any other model than the one I have.

One of the next features I want to implement is a configuration file, where people can just set up the keyboard/colors/etc from there instead of needing to recompile it.


13: Subject: Re: Mailstation Emulation v0.1 Release

(top)

From: "cyranojones_lalp" <cyranojones_lalp@...>

Jan 12, 2010


(Reply is below the screenshots)

The pix are 1024 x 768, click for full size, or "view image" if clicking doesn't work.

This is the green tinted background. The ide shows some of the code mods. (By the way, the ide is "Geany" and it is in the Ubuntu repo.)

ation/files/Screenshot-39.png>

This is white background, with black text. I don't really like the greenish tint, even though the mailstation actually is greenish. I re-arranged the sdl-event handling with nested switches.


All 6 colors running at same time! The backgrounds are much brighter than the actual mailstation LCD, but I don't think I would want to make them much darker. I don't really care for the red or green tints, but yellow and bluish are ok. The "new" 120/150 LCD is the greenish one below the white.


One other code change not shown above, to the writeLCD function:

lcd_data8[n + (x * 8) + (lcdaddr * 320)] = ((val >> n) & 1 ? LCD_fg_color : LCD_bg_color);

When I was figuring out how it worked, I changed some of the param names in that function to these:

writeLCD(ushort lcdaddr, byte val, int lcdhalf)

But the only change to logic was to split the color var into two (fg & bg).



Oh, yeah, that would be better than fiddling with the makefile. I was thinking of adding an option to the makefile, but that would still need to be edited to pick the version. Just compiling it in would avoid the config hassle. I actually did something similar with your cga font, I edited it into "cgafont.s" (the db's from your cgafont.inc, with a label at the head) to allow sdcc code to link with it:

.module cgafont

.area _CODE

_cgafont_data::
.db #0x00, #0x7e, #0x7e, #0x36, #0x08, #0x1c, #0x08, #0x00
etc. etc.

> And for the record, this is the same font style that I use in my FyOS software on the Mailstation.

I already guessed that. :-)

> > With usleep(100), it idles at about 4 or 5 percent,

> I never noticed it hindering my machine as I worked so I never even thought to check.

I didn't notice any sluggish behavior, but I have the "system monitor" added to gnome desktop's toolbar, so whenever that drops down, it's right there. CPU temps and system temps, too. (see screenshot with 6 mailstation emulators running.)


Looks good! (So sleep is in ms on win32? I think it is in sec on Linux.)


1 ms seems fine to me. It's not till you get up over 20 ms that it really starts to get bad. Actually, right before 16 ms, the delay goes back to un-noticeable. Seems the emulation of the "slice" is happening in less than a millisecond, so most of the 16 ms is just waiting.

The delay peaks around usleep(15300) or so, and at 15400, it drops back to un-noticeable. (This is on a dual 2.5 GHz AMD processor.) For a default, 1 ms seems good for just about any cpu speed. Maybe you can make it a runtime config option?

I used the highly scientific procedure of counting "thousands", from power-on to splash, and I don't quite get to the "s" in "one thousand two" with the 1-500 us range. Seems I can get to "thous" at 1000 us, and "thousan" at 5000 us. At 10,000 us I can just about get the whole "one thousand two" out. On a real mailstation I get the same as the 500us and lower.


Are we in agreement that the emulator runs at "12 MHz" only because you call it every 16 ms, and it runs Z80_IPeriod = 187500 T-states every time it is called?

Just for kicks, I just now changed it to call z80_execute every time thru the main loop, with no usleep. Now the mailstation code is running at warp 11!!! I'm not sure, but I think it is better than 16 x 12MHz = 192 MHz!!! And that would be if it was taking a full millisec each call, and I think it is closer to half a millisec, which would mean close to 400 MHz. Wheeeeeeeeeeeeee!!!!!!


Not sure what you mean here??? You mean the PC's cpu, right??? But what RTC????? Oh, you mean setting the mailstation to diff cpu speeds, right? And likewise with mailstation rtc. You want it to adjust emulation speed based on ports 0d & 2f.


All I know are the 8/10/12 MHz speeds. There might be more.

I was wondering if you ever tested the various interrupts, to figure out if they were INT's or NMI's?

> I'd be curious to see your color schemes, if you want to take screencaps or whatever. I've never even seen the screen of any other model than the one I have.

I uploaded some screenshots to the root level of the group site. I am gonna try to embed them at the top of this post, but if it doesn't work, you can see them there.

> One of the next features I want to implement is a configuration file, where people can just setup the keyboard/colors/etc from there instead of needing to recompile it.

Config file would be great!

I was thinking that rather than having several canned colors, it would be easier to tweak if you could adjust the rgb values of the current color. Use ctrl-1, ctrl-2, & ctrl-3 for inc red, inc green, & inc blue. Use ctrl-sh-1 (2, & 3) for decrement. And then save the one you like in the config file. Or better, ctrl 1, 2, 3 for inc, and ctrl q, w, e for dec.

CJ

