Ask Justin Frankel
No reasonable question unanswered since 2009!
Suggested topics: programming, music, sleep, coffee, etc.
Note: please do not ask questions about REAPER features, bugs or scheduling; use the forums instead.
Question:
So given compiler technology in 2011, is it not as worthwhile to know the guts of CPUs (cache intricacies) as it was in the 90s?
Asked by Will (24.234.128.x) on January 5 2011, 12:06pm
Reply on January 6 2011, 6:49am:
I think it is still worthwhile, but two factors are primarily different (compiler tech has gotten incrementally better, but I wouldn't say it is a primary factor):
I think you're less likely to be writing things that use more than 90% of the available CPU power at once, which makes it harder to test those sorts of optimizations -- getting a result of "this took less time than before" rather than "this can actually run fast enough to watch" is less satisfying -- and more time might be spent making things run in multiple threads for bigger improvements, rather than, say, optimizing for cache line size, etc.
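(To make the "bigger improvements from threads" point concrete, here is a minimal C++ sketch; process_range, process_buffer_threaded, and the half-gain loop are hypothetical names for illustration only, not code from REAPER or any Cockos project.)

// Hypothetical example: instead of micro-optimizing one loop for cache
// behavior, split the work across worker threads for a larger speedup.
#include <thread>
#include <vector>
#include <cstddef>

// stand-in for whatever per-sample/per-item work you actually do
static void process_range(float *buf, size_t begin, size_t end)
{
  for (size_t i = begin; i < end; ++i) buf[i] *= 0.5f;
}

// splits buf[0..len) into roughly equal chunks, one per thread (nthreads >= 1)
void process_buffer_threaded(float *buf, size_t len, unsigned nthreads)
{
  std::vector<std::thread> pool;
  const size_t chunk = len / nthreads;
  for (unsigned t = 0; t < nthreads; ++t)
  {
    size_t begin = t * chunk;
    size_t end = (t + 1 == nthreads) ? len : begin + chunk; // last thread takes the remainder
    pool.emplace_back(process_range, buf, begin, end);
  }
  for (auto &th : pool) th.join(); // wait for all workers to finish
}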
CPUs are a lot more complex, so it is a lot harder to know their intricacies than it was. Back in the day, we could write assembly code for Pentiums with paired instructions for the U and V pipelines, we knew approximately how big the caches were and what the cache line size was, and it was a pretty simple target. These days, if you write assembly, you're targeting many different generations of CPUs, which have a wide variety of characteristics, can schedule instructions out of order, etc., so it's much harder to know how a change will affect the code, and often hard to test it.
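(As a small sketch of what "knowing the cache line size" buys you, and why it's a guess on modern hardware: the 64-byte figure and the per_thread_counter struct below are assumptions for illustration, not anything from the answer above.)

// Hypothetical example: padding per-thread data to an assumed 64-byte
// cache line so two threads don't fight over the same line (false sharing).
// The 64 is exactly the kind of detail that varies across CPU generations
// and is hard to pick once for every target.
#include <cstdint>

#define ASSUMED_CACHE_LINE 64 // an assumption, not a universal constant

struct alignas(ASSUMED_CACHE_LINE) per_thread_counter
{
  uint64_t count;
  // alignas() pads the struct out to the assumed line size, so an array of
  // these keeps each thread's counter on its own line -- if the guess of 64
  // is right for the CPU it actually runs on.
};

static per_thread_counter counters[8]; // e.g. one slot per worker thread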