
Thread 107083562

32 posts 8 images /g/
Anonymous No.107083562 [Report] >>107083635 >>107083935 >>107083961 >>107083984 >>107084136 >>107084166 >>107084200 >>107084338 >>107086661 >>107086793 >>107086889 >>107086976 >>107087645
What Is The Purpose of Floats Anyways?
He's got a point.
Anonymous No.107083635 [Report] >>107086929
>>107083562 (OP)
they encode a very wide range of values with good enough precision for many applications using relatively few bits
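A quick Python sketch of that tradeoff: a 32-bit float spends 8 bits on the exponent and 23 on the mantissa, buying a range of roughly 1.2e-38 to 3.4e38 at only ~7 significant decimal digits (the `f32` helper here is illustrative, just rounding a Python double through the 32-bit format).

```python
import struct

def f32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

print(f32(3.4e38))       # near the top of the range, still finite
print(f32(123456789.0))  # only ~7 significant digits survive: 123456792.0
```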
Anonymous No.107083935 [Report]
>>107083562 (OP)
>tell me you (only know computer science and) don't know scientific computing without telling me you don't know - the meme
lol
Anonymous No.107083961 [Report] >>107084011 >>107084028 >>107084115 >>107084145 >>107086828
>>107083562 (OP)
Because without floats in 3D rendering you either have PS1-style wobbly polygons (fixed point arithmetic) or have to use some sort of lossless decimal structure (comparatively slow as fuck).
Anonymous No.107083984 [Report]
>>107083562 (OP)
>binary data was not supposed to have decimal parts
someone who can't use "decimal" correctly should not be making a meme in this format
Anonymous No.107084011 [Report]
>>107083961
why 3D specifically? why would rendering in 2D or 4D or 512D be any different?
Anonymous No.107084028 [Report] >>107084064
>>107083961
I'm pretty sure it's the affine transformations and lack of Z-buffer that cause wobbly polygons on the PS1.
As a counter example, Doom I & II use 16.16 fixed point and don't suffer from jitter.
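For reference, 16.16 fixed point in the style of Doom's `fixed_t` is just a 32-bit integer with 16 fractional bits; a rough Python sketch (names modeled on, not taken verbatim from, the Doom source):

```python
FRACBITS = 16
FRACUNIT = 1 << FRACBITS  # 1.0 in 16.16 fixed point

def to_fixed(x: float) -> int:
    return int(x * FRACUNIT)

def fixed_mul(a: int, b: int) -> int:
    # The intermediate product needs 64 bits before shifting back down;
    # Python ints are arbitrary-precision, so this is automatic here.
    return (a * b) >> FRACBITS

def from_fixed(a: int) -> float:
    return a / FRACUNIT

print(from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.25))))  # 3.375
```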
Anonymous No.107084064 [Report] >>107084145
>>107084028
Doom does suffer from jitter due to fixed point arithmetic, although it's usually controlled pretty well.
Anonymous No.107084115 [Report]
>>107083961
Fixed point *is* the solution to the pixel-snapping problem. All modern triangle rasterizers are fixed-point.
Anonymous No.107084136 [Report]
>>107083562 (OP)
I don't know, that seems like something the compiler needs to worry about, not me.
Anonymous No.107084145 [Report] >>107084185
>>107084064
>>107083961
which part of the pinhole projection model is it that suffers from fixed point arithmetic, exactly? can you explain?

I work with 3d-to-2d projection all the time and I can't for the life of me see why it would matter.
Anonymous No.107084166 [Report] >>107087699
>>107083562 (OP)
Except computers were invented for floating point calculations.
They have scientific and military uses.

Computers were not invented for spying on the population, which is what IBM used them for with their chars and integers.
Anonymous No.107084185 [Report] >>107084307 >>107084341
>>107084145
https://doomwiki.org/wiki/Wall_wiggle_bug
Anonymous No.107084200 [Report]
>>107083562 (OP)
i don't even understand floating point that well so I certainly am not going to play contrarian
Anonymous No.107084307 [Report]
>>107084185
Doesn't sound like an issue inherent to fixed-point.
It's caused by polar coordinates creating a singularity (i.e. a divide-by-zero), made worse by using a look-up table to calculate the angle, which discards the lower 19 bits of the input angle.
The fix linked in the first reference does the exact same math, but the look-up table is replaced with actually calculating tan(), thus preserving all 32 bits of precision.
Anonymous No.107084338 [Report]
>>107083562 (OP)
>He's got a point.
A floating point, that is!
Anonymous No.107084341 [Report]
>>107084185
this doesn't look like something that would happen with modern triangle/quad rendering and tessellation
Anonymous No.107086039 [Report]
Ever heard of normalized coordinates
Anonymous No.107086661 [Report] >>107086705 >>107087904
>>107083562 (OP)
the real question is: is there any point to learning the new meme low-precision floating point formats like bfloat16, float8, and bfloat8 outside of AI?
they're supposed to be more accurate or something compared to halves?
Anonymous No.107086705 [Report] >>107086778
>>107086661
>are low precision types supposed to be more accurate
based retard. No, it's just memory usage. AI tards cannot into efficient code, so they have to bloat hardware memory instead.
Anonymous No.107086778 [Report]
>>107086705
>reading comprehension
more accurate as compared to the (at least on gpus) very widely supported low precision type, which i mentioned, 16 bit IEEE floats i.e. halves
which happen to be the same size as bfloat16
Anonymous No.107086793 [Report]
>>107083562 (OP)
>loses accuracy at higher numbers
what's the fucking point here exactly? why would you want to break incrementing big numbers?
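The "breaking" is just the spacing between representable values growing with magnitude. In Python (IEEE doubles), every integer up to 2**53 is exact; past that, adding 1 can silently do nothing:

```python
big = float(2**53)
print(big + 1 == big)  # True: the +1 is lost to rounding
print(big + 2 - big)   # 2.0: the spacing between doubles here is 2
```

Nobody "wants" this; it's the price of covering ~1e308 in 64 bits. If you need exact big counters, that's what integers are for.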
Anonymous No.107086828 [Report]
>>107083961
usecase for 3d?
Anonymous No.107086889 [Report]
>>107083562 (OP)
OP is retarded.
Anonymous No.107086924 [Report]
Just a reminder that Chuck Moore made processors (GreenArrays) with his own CAD software (OKAD) written entirely in his own Forth, and he used no floating point, only fixed point arithmetic.
scabPICKER No.107086929 [Report]
>>107083635
I good enoughed your mom

I'm sure you won't mind, enough.
Anonymous No.107086943 [Report]
>ctrl + f
>posits
>0 results
Anonymous No.107086976 [Report]
>>107083562 (OP)
Honestly floats have no place outside graphics and simulation nowadays.
Anonymous No.107087645 [Report] >>107089268
>>107083562 (OP)
fixed point arithmetic doesn't solve the 1/10+2/10 problem btw
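Right: 1/10 has no finite binary expansion, so a binary fixed-point format already rounds it at representation time, before any arithmetic happens. A quick 16.16 demonstration:

```python
FRACBITS = 16
FRACUNIT = 1 << FRACBITS

tenth = round(0.1 * FRACUNIT)            # 6554, i.e. 0.100006103515625
print(tenth / FRACUNIT)
print((tenth + 2 * tenth) / FRACUNIT)    # 0.300018..., not exactly 0.3
```

The addition itself is exact in fixed point; the error was baked in the moment 0.1 got encoded in base 2.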
Anonymous No.107087699 [Report]
>>107084166
>Computers were not invented for spying on the population
heh
Anonymous No.107087904 [Report]
>>107086661
bfloat16 is literally just the upper half of a single precision float, so the only advantage is half the storage space at a cost of 70% of the precision.
It's still accurate to ~2 decimal digits (enough for percentages), and it's trivial to convert back to a single precision float, so it covers a decent number of use cases.

fp8 and fp4 are pretty much just for ML, yeah. They have shit accuracy and range, which only really works well when most of the numbers are normalized and you have literally billions of them to diffuse the error out over.
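Since bfloat16 is literally the top 16 bits of a float32, the conversion is a shift and mask; a small Python sketch (helper names are illustrative, and this truncates rather than rounding to nearest):

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping only its upper 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def from_bfloat16_bits(b: int) -> float:
    """Widen back to float32 by zero-filling the discarded mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

print(hex(to_bfloat16_bits(1.0)))                  # 0x3f80
print(from_bfloat16_bits(to_bfloat16_bits(3.14159)))  # 3.140625
```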
Anonymous No.107089268 [Report]
>>107087645
If you want that, you want either decimal or rational math. They have their downsides.
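Both are in the Python standard library, which makes the tradeoff easy to see: rationals are exact but can grow without bound, decimals fix the base-10 representation problem but still round eventually.

```python
from fractions import Fraction
from decimal import Decimal

print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))     # True
print(0.1 + 0.2 == 0.3)                                      # False with binary floats
```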