In this thread, we post our shitty code!
I'll start: http://humblefool.net/stuff/php/index.html
some perl i've written...
$a=$_=pop;s^.^$'H.^;y,:-@[-` --{-,A-Ga-f.,;crypt($a,$_)=~/.../;die$'
and then there's this beautiful is_prime() function:
sub is_prime{map{$n=$n%$_?$n:0}2..($n=pop)**.5;$n>1&&1}
thread over
sub is_prime{map{$n=$n%$_?$n:0}2..($n=pop)**.5;$n>1}
i have no idea why that &&1 was in there... that's what happens when i write code after being awake for over 70 hours...
sub is_prime{('x'x shift)!~/^(..+)\1+$/}
I'll grant that yours is superior, because it contains ^/^, which is kind of making me giggle.
Does it have to be shitty? Will I be castigated if I post something good here?
I'm not sure I would go as far as calling it shitty, but some of the code in my sound processing/effects package is a little messy:
http://oceanbase.org/data/files/synth-20061128.tar.gz
Here's what I wrote about it: "Synth (November 28, 2006) is a collection of programs for generating and changing audio, meant to be connected with pipelines. Nothing too fancy, but a great swiss army knife for sound, since you can pipe its output right into a sound card. Includes gain, panning, delay, limiter based on soft saturation, compressor, a basic reverb, and filters." Also, unlike most of my stuff, I made sure it compiles fine on Windows and Linux, so try it.
Could you post some representative code?
>>9, okay.
/* unfmt: convert integers on stdin to internal float format */
static int monosrc; /* convert from mono? */
static void conv_u8(void);
static void conv_s8(void);
static void conv_16(void);
static void conv_24(void);
static void conv_32(void);
#define PERCHANNEL(code) (code); if (monosrc) { (code); }
and then...
static void conv_24(void)
{
float f;
double d;
int s;
unsigned char *sr;
int c;
sr = (unsigned char *)&s;
for (;;)
{
#ifndef BIG_ENDIAN
c = getchar();
if (c == EOF)
return;
sr[0] = c;
#endif
c = getchar();
if (c == EOF)
return;
sr[1] = c;
c = getchar();
if (c == EOF)
return;
sr[2] = c;
#ifdef BIG_ENDIAN
c = getchar();
if (c == EOF)
return;
sr[3] = c;
sr[0] = (sr[1] & 0x80) ? 0xff : 0;
#else
sr[3] = (sr[2] & 0x80) ? 0xff : 0;
#endif
d = (double)s;
/* squish range of samples from -8388608 .. 8388607
to -32768 .. 32767 */
d /= 256.0f;
f = (float)d;
PERCHANNEL(fwrite(&f, sizeof f, 1, stdout))
}
}
Come to think of it, that really is pretty messy code. ::rewrites::
The only time you should ever need BIG_ENDIAN defines is when speed is absolutely critical, and you can't use byte-wise operations. I'm pretty sure speed is not absolutely critical here.
Remember:
unsigned char be[4]={...}; // big-endian integer as bytes
int v=(be[0]<<24)|(be[1]<<16)|(be[2]<<8|be[3];
Or rather,
int v=(be[0]<<24)|(be[1]<<16)|(be[2]<<8)|be[3];
In this case, I decided the program would assume the input samples have the same endianness as the CPU. Thus, in order to extend 24-bit integers to 32 bits, it needs to know which endianness that is.
Ultimately I might make the data endianness an option and do like you say. Thanks for the comment.
Making 24-bit samples have the same endianness as the CPU is pretty nonsensical. There is nothing to gain from doing so (as there is no native 24-bit integer format in the first place), and it only serves to make your code more complex than if you had picked an arbitrary endianness and stuck with it.
>>14
I will eventually default to little-endian and make big-endian an option, but FYI, here's why I did it that way. It makes the 16- and 32-bit code simple; written without regard to portability, it already worked this way. Also I noticed while little-endian .wav files are the standard on PCs, (PPC) Mac users tend to use big-endian .aiff files, so it seemed logical enough.