Leaderboard
Popular Content
Showing content with the highest reputation on 03/24/2018 in all areas
-
1 point
-
Perceived sensitivity
CaptaPraelium reacted to potato psoas for a topic
Okay, I'm just putting my thoughts into words here. I just want to figure things out for myself and you can read along to see if I'm making sense...

One of the things I mentioned before was that if your monitor were able to shift to match your eye's FOV with the in-game FOV, then you wouldn't need to change your cm/360 (I think - or was it the gear ratio...) - it would feel exactly the same, and across all points of the monitor. But because that ain't going to happen, we have to figure out how the distortion changes so that we can compensate for it. As a reminder, the eye-to-monitor distortion and the rectilinear projection distortion cancel each other out when the game FOV matches the eye's FOV, thus causing the sensitivity to be the same at all points on the monitor (as it is perceived by the eyes).

Previously, I thought that each method had only one behavior, i.e. 0% MM only became faster at the edges and 100% MM only became slower at the center, but adding the function of the eye into the equation instead gives us two behaviors as you move away from your eye's FOV - one for when you approach 180 FOV and one for when you approach 0 FOV. So I thought about it and I found this:

0% MM: The center is matched for every FOV, but as you approach 180 FOV the sensitivity gets faster at the edges, and as you approach 0 FOV the sensitivity gets slower at the edges.

100% MM: The edge is matched for every FOV, but as you approach 180 FOV the sensitivity gets slower at the center, and as you approach 0 FOV the sensitivity gets faster at the center.

Keep in mind, monitor matching (MM) in this sense refers to angle matching, not distance matching, but the concept is still the same - we are just identifying how the distortion changes so we can figure out how to compensate for it. As you can see, there is still distortion, but the behaviors are different, and when the distortion occurs depends a lot on sitting distance.
One of the important things to remember is that there is a limit to the distortion with 100% MM, since it is bounded by the ratio of the chord length to the arc length, whereas with 0% MM the cm/360 approaches 0 at 180 FOV. But as I said before, this does not mean 0% MM is the inferior method.

So in the end, there is still going to be distortion, but the approach you take determines the sensitivity. I think a lot can be said about your eye-to-monitor distance, as this can also exacerbate the flaws of each method. If you intend to use 0% MM, make sure your monitor is further away from your eyes, whereas if you intend to use 100% MM, make sure your monitor is closer to your eyes. I don't know the specifics but I'm sure there's a formula to help you find the sweet spot... but even then, there will still be distortion.
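As a rough companion to the two behaviours above: here is the conventional monitor *distance* match formula as a Python sketch. Note this is distance matching, whereas the post is talking about angle matching, so treat it as an adjacent reference only; the function and parameter names are my own.

```python
import math

def monitor_match_scale(fov_old_deg, fov_new_deg, match=0.0):
    """Sensitivity multiplier when zooming from fov_old to fov_new,
    matched at `match` fraction of the half-screen.
    match=0.0 -> 0% MM (zoom ratio, centre matched)
    match=1.0 -> 100% MM (screen edge matched)."""
    t_old = math.tan(math.radians(fov_old_deg) / 2)
    t_new = math.tan(math.radians(fov_new_deg) / 2)
    if match == 0.0:
        # the limit as match -> 0 is the pure zoom ratio
        return t_new / t_old
    return math.atan(match * t_new) / math.atan(match * t_old)

# 80 deg hipfire down to a 14.8 deg zoom:
print(monitor_match_scale(80, 14.8, 0.0))  # 0% MM
print(monitor_match_scale(80, 14.8, 1.0))  # 100% MM
```

Consistent with the posts below, 100% MM gives a larger multiplier than 0% MM for the same zoom, which is one way of seeing why 0% is often described as feeling 'slow'.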
Helping to add more WPS option and league of legends support
AgentTryHard reacted to DPI Wizard for a topic
Working on this now, not quite sure how long it will take.
1 point
-
Amid Evil goes into early access on 3/12/18. Could you please add support for this game? http://store.steampowered.com/app/673130/AMID_EVIL/
-
Perceived sensitivity
potato psoas reacted to CaptaPraelium for a topic
I'm sorry I've been sparse with the updates to this thread. This is largely because it's very time-consuming to make the illustrations which are really needed to explain the progress I'm making. I'm old... I've been doing it with pen and paper and such, so I might just get a camera and put pics of some of that here, rather than just nothing.

Theoretically, 0% is perfect. For the image, on the screen. But what we see is not what is on the screen. What we see is on the inside of our eye. This is why 0% feels 'slow'. It does not account for the last step in the game world being projected from the monitor to our eye. We do not have any formula which does so, and accordingly, 0% is the most correct theory we have as of right now.

As per the science nerdery posted above, we know that we do not measure the distance between two points, in the real world or the game world, directly - as we would with, say, a ruler, or by pixels on screen. We measure it by deriving the distance from the angle between the two points. This is a terrible thing to attempt to explain without pictures, but I'll try, because it offers us two interesting insights. Firstly, it offers some validity to 'monitor matching', and secondly, it offers some hint as to why we seem to prefer to monitor match at the kind of percentages which we do. If none of this makes any sense, I'll do some cruddy mspaint to explain it.

Firstly, let's picture our monitor from above or from the side (it doesn't really matter, but I do it from the side because the games use VFOV), so we have a straight line. Now we need to measure our monitor and our seating position (assuming that your eyes are directly in line with the centre of the screen, which for the purpose of FPS games, they should be). We can use the following formula to find our actual FOV of the monitor.
I sit 32.5cm from a 1440p 27" monitor (I can hear my mother telling me that's unhealthy), so mine looks like this:

widthpx = 2560
heightpx = 1440
diagcm = 27*2.54   <-- Yep, centimetres because science. Also I'm Aussie. You can use inches, just don't *2.54 here.
viewdistance = 32.5

heightcm = (diagcm/sqrt(widthpx^2+heightpx^2))*heightpx
actualfov = 2*arctan((heightcm/2)/viewdistance) = 54.70173510519102597649

Unsurprisingly, Valve know their stuff (see links above) and I have adjusted my workspace to bring my FOV close to the natural 55-60 degree FOV where our eyes and brain treat the image as important (beyond this is our peripheral vision, where we do not see so much detail but mostly movement - again, see the links above).

So, now we can envision that there is a triangle formed between our eye and the edges of the screen, and the angle at the eye is as calculated above. (We don't need to worry about stereo for this, so we just use the dominant eye.) Cool. But let's imagine that angle is increased to, say, 80 degrees (my hipfire FOV). In order for the triangle to meet the edges of the screen, our eye would have to be much closer... and if it is (i.e. we move our head closer to the monitor), we see NO distortion. The distortion of the image is NOT caused by the projection. It is caused by the fact that our head doesn't move to match the focal point of the projection.

Here, we start to uncover the real reason WHY we feel the need to change mouse sensitivity when zooming at all. It's about the amount of angle our eyes need to move to cover the same amount of angle in the game world. This is distinct from the distance our eyes move to cover the distance between two points. Our brain doesn't work that way. It thinks of all distances as angles, which makes sense really, since it's all a matter of feedback from our eyes telling our brain how much they rotated.
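The monitor-FOV calculation above translates directly to code. Here's a minimal Python version reproducing the numbers from this post (monitor and viewing distance as given; variable names follow the post):

```python
import math

# 27" 2560x1440 monitor, eyes 32.5 cm from the screen (lengths in cm)
widthpx, heightpx = 2560, 1440
diagcm = 27 * 2.54
viewdistance = 32.5

# physical height of the visible screen area
heightcm = diagcm / math.hypot(widthpx, heightpx) * heightpx

# vertical FOV the screen actually subtends at the eye
actualfov = math.degrees(2 * math.atan((heightcm / 2) / viewdistance))
print(round(actualfov, 2))  # ~54.70 degrees
```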
Now, if we take a few FOVs (in my testing I've been using actual, hipfire, 4x and 8x zoom) and measure out the distances to the focal points, we will have one very close to the monitor (hipfire), one where we sit (actual), one some distance behind where we sit (4x), and one very far behind us (8x). Guess what the ratios between those distances are? Zoom ratio. Great. And we already know that Zoom Ratio/0% gives us perfect movement in the centre of the screen. So, why does it fail?

Let's say that we see a target which is half-way to the edge of our monitor. Let us not make the mistakes of the past and think of this as pixels or cm or inches - it is an angle. Our brains all agree on this. In my case (using the same formula above and dividing the screen by half again), that's:

angle = 2*arctan((heightcm/2/2)/viewdistance) ~= 29.00 degrees from the centre of the screen.

So, now let's put this into effect using our hipfire, 4x and 8x zoom. Our eyes move 29 degrees - how far do we need to rotate in game to aim at this target? (Yes, it can be simplified mathematically, but for the purpose of conversation...) We can calculate the focal distance from our screen, for a given FOV, using the following formula:

opticallycorrectdistance = (heightcm/2)/(tan(fov/2))

So, I'll do that for my 3 example FOVs:

hipdistance = (heightcm/2)/(tan(80/2)) = 20.03463865597708287603
fourdistance = (heightcm/2)/(tan(14.8/2)) = 129.4379759752501060469
eightdistance = (heightcm/2)/(tan(7.45/2)) = 258.21347922131382533488

And now we can just use the same formula above, with these distances, to calculate how far that ~29 degrees of eye movement amounts to in the game world:

actualfov = 2*arctan((heightcm/2/2)/hipdistance) = 45.52095254923326167504
actualfov = 2*arctan((heightcm/2/2)/fourdistance) = 7.43098865714869079575
actualfov = 2*arctan((heightcm/2/2)/eightdistance) = 3.72894033006548981691

Ok, that's all well and good, but why is it important?
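The focal-distance and eye-angle calculations above can be sketched in a few lines of Python (function names are mine; it reproduces the numbers in the post):

```python
import math

heightcm = 27 * 2.54 / math.hypot(2560, 1440) * 1440  # ~33.62 cm
viewdistance = 32.5

def focal_distance(fov_deg):
    """Eye distance at which a given in-game vertical FOV would be
    projected with no distortion ('optically correct distance')."""
    return (heightcm / 2) / math.tan(math.radians(fov_deg) / 2)

def angle_at(distance, extent_cm):
    """Angle subtended by extent_cm of screen height at a given distance."""
    return math.degrees(2 * math.atan((extent_cm / 2) / distance))

halfway = heightcm / 2  # target half-way to the screen edge

# the same ~29 degrees of eye movement, seen from each FOV's focal point
for fov in (80, 14.8, 7.45):  # hipfire, 4x, 8x from the post
    d = focal_distance(fov)
    print(fov, round(d, 2), round(angle_at(d, halfway), 2))
```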
This quick example, when we compare the results to those of 0% MM/zoom ratio, demonstrates that as our FOV decreases, the effect of the distortion on angular displacement decreases. So what? Well, this tells us that the most important adjustment to our mouse sensitivity is that made between the widest FOV - which is going to be hipfire - and our actual FOV of the screen from our eyes. As the FOV becomes smaller (higher zoom in game), the distortion becomes lower and lower, and less and less meaningful.

So, since we can NEVER make a perfect adjustment of sensitivity for all parts of the screen, because the distortion is not constant across the screen, but we CAN make an adjustment which is perfect for one part of the screen (this is why there is a percentage in monitor matching, a coefficient in BF and a zoom sensitivity in OW, etc.)... which part of the screen is most important? If we say the centre, then we use zoom ratio. But almost all agree that 0% feels 'slow', and we know that is because of the angles-vs-distance thing. If we use CSGO or BF1 defaults, we use 4/3 aka 75% because muh feels. If we're the average OW pro, we use 18%. Why does everyone disagree? Well, if you take the hipfire FOV of a player, and his actual FOV, and work out your ratio from there... suddenly it all begins to line up with what 'muh feels' has been telling us all along.

Sure, ANY variation from the optically correct distance from the screen, for a given FOV, will introduce distortion, and that distortion will ensure that our mouse sensitivity will never be correct to any given point on the screen... but the lower our FOV gets - the more zoomed in we get - the less of a difference it makes. The big difference is that between our wide hipfire FOV and our actual FOV of the screen.
Perceived sensitivity
potato psoas reacted to Skwuruhl for a topic
You literally said this. ? No?

This reads like there's some extra multiplier reducing sensitivity even more after scaling sensitivity. (there isn't)

*citation needed*

Where did I say to call it horizontal? It's vertical. In every case of "1:1" it's vertical. Calling it 1:1 implies it perfectly matches sensitivity 1 to 1.

https://prosettings.net/overwatch-pro-settings-gear-list/ has their monitors (and other meaningless shit like gaming chairs lmao). Distance to screen probably doesn't vary that much, especially in the tournament where they all have the same desk.

Yes it is. Also that's interesting, since usually people who think it's off think it's not sensitive enough.
Perceived sensitivity
potato psoas reacted to CaptaPraelium for a topic
Skwuruhl is the most knowledgeable in the field of mathematics and firmly believes that the mathematically correct ratio is the correct method to scale sensitivity. You are the most experienced in the field of pewpew and have constructed outstanding formulas based on your experience. I am a programmer and am following a programmatic method to find a formula, by working my way backward from what we know and experience to discover what we want to know and experience.

You're quite right that, with experience and 'getting used to it' (i.e. practice), ANY sensitivity will do just fine. Heck, they didn't have any of these configuration options in the 80's and I did just fine blasting pixels at 640x480x16 colours. I bet we've all used a cruddy joystick to land on tiny platforms after jumping over some dangerous object with perfect timing. Our brains will take care of it eventually, as we gitgud. It's also true that the optimal process will always be optimal; if we're going to get used to a scaling method, it makes sense to use that which is correct, and as it stands, that's zoom ratio.

That being said, one can't help but wonder whether the reason why zoom ratio feels wrong is just bias (i.e. we 'got used to' something else) or if there's more to it. Bias is incredibly powerful and I will not be at all surprised if that turns out to be all there is to it. Still, I am not satisfied with making any assumptions, be they that bias is the critical factor here or that it is not. I have a strong inclination towards Skwuruhl's approach of 'math don't lie', but after spending 6 months in an attempt to "suck it up" and "get used to" using 0% MM aka zoom ratio, in hopes of overcoming my bias, I still felt that there was something else at play... and I'm just that type of guy who has to ask, "Why?". I'm certainly not an expert in human visual perception, but I've spent a lot of time over the past few months learning as much as I could...
and what I've learned has told me that there is no doubt that we all perceive the image differently, as a result of the distortion introduced by the projection. In my most recent posts, I have attempted to account for that distortion.

As we know, zoom ratio is the mathematically correct method to scale between zoom levels, but these are not solid rectangles we're scaling - they are filled with an image. That image is what gives us the perception that our sensitivity should scale at all. As Valve put it, "an optically correct perspective can be obtained by matching the Camera's angle of view to the angle between the Player's eye and the edges of the image on his screen. Obviously this angle varies according to the actual size of the Player's screen and how far away from the screen he is actually sitting".

It is a common analogy to consider that the screen is a 'window' through which we view the game world, and where our angle of view of the screen matches the FOV in game, that analogy holds true... but once those angles differ, the analogy weakens. A more appropriate analogy would be that the 'window' formed by our screen is in fact a 'lens'. If we consider it as such, zoom ratio is the correct scaling method for us to use... but there's more to consider. If we were looking at the real world through such a 'lens', like a telescope with a big rectangular eyepiece, using zoom ratio would work just fine, because we would just be zooming an optically correct image into another optically correct image. However, in game, we are not doing this. We are zooming a distorted image.

Consider the design of a zoom lens: the three lenses on the left are performing the actual zooming by adjusting the focal length, and the lens on the right focuses that image onto the eye (or film or whatever). For our purposes, zoom ratio describes the result of the three zoom-system lenses; however, we do not have an analogous formula for the focussing lens on the right.
At present, that is fixed, controlled by our distance from the monitor. To use another image to explain it... first, the description of the image: "A varifocal lens. Left image is at 2.8 mm, in focus. Middle image is at 12 mm with the focus left alone from 2.8 mm. Right image is at 12 mm refocused. The close knob is focal length and the far knob is focus."

Zoom ratio describes the difference between the left and centre images. We do not have a formula to describe the difference between the centre and the right image. ...I'm working on it.
Perceived sensitivity
potato psoas reacted to CaptaPraelium for a topic
Yes, we can see that they are the same for the purpose of measuring the zoom, by your Overwatch image I quoted right at the top of this thread, but the point of this thread is that the distortion of the image has a perceptual effect on our brain (for an unrelated example, high FOV makes you seem like you are running really fast), and accordingly there is more to the way we perceive the shift in FOV than just the zoom amount. This is why it is important to understand the distortion fully, so that we can analyse the effects that such distortion would have on our perception.

The reason I asked if the axes are treated the same is that all of the formulas I had found for rectilinear projection imply stretching in the X axis which is greater than the stretching in the Y axis. As you say, this has no bearing on zoom amount, because as I said, the stretching is constant anyway (such as you've mentioned, the same formula for zoom ratio will apply, even for the diagonal FOV)... But you have said that the stretching is in fact equal on both axes, and you are the first I have heard suggest this. We have both said in this thread that the effect of having more distortion visible is related to the aspect ratio of the monitor, but there's more to it than that, as visible in this image where the aspect is vertical. You are taking that a step further and saying that there is no additional stretching in the X axis, when compared to the Y axis, regardless of what is visible. I asked if we have any evidence of this (because like you, I am attempting to avoid any pseudoscience here) and the funny thing is, I think I may have already posted it.

As I've discussed above, the majority of literature available regarding rectilinear projection pertains either to cartography or photography... and it seems you've brought me to realise why this is - because the projection method we are using on computers is not generally referred to as 'rectilinear projection'.
Yes, it is "rectilinear" projection, in the sense that straight lines in the 3D world are represented as straight lines in the 2D projection, but in the computer graphics field the term I'm seeing more widely applied is 'perspective projection'. This naming distinguishes what we see in-game, where more distant objects appear smaller (as in real life), thus creating a sense of depth in the image, from 'parallel projection', which is used in computers for such tasks as CAD modelling, where the lengths of the lines must also remain intact; that effect is achieved by mapping the 3D coordinates via parallel lines onto the 2D projection. As in, ALL of these are rectilinear projections - but only the first one applies to us. The distinguishing features here are not only that straight lines are projected as straight (i.e. it is rectilinear), but that parallel lines are not projected as parallel; rather, they converge on a vanishing point (i.e. perspective). So, now that I have the correct terminology, it's been much easier to find information regarding the nature of the projection... Well, I say 'find', but I should say 'found', because I already came across it. Some of it is even pasted above, and some I left out as it seemed to be irrelevant at the time.
Now I know better, let's sort that out with some link spam:

http://www.significant-bits.com/a-laymans-guide-to-projection-in-videogames/
https://en.wikipedia.org/wiki/3D_projection#Perspective_projection
https://en.wikipedia.org/wiki/Camera_matrix
https://en.wikipedia.org/wiki/Perspective_(graphical)#Three-point_perspective
https://en.wikipedia.org/wiki/Graphical_projection#Perspective_projection
https://www.math.utah.edu/~treiberg/Perspect/Perspect.htm
https://www.cse.unr.edu/~bebis/CS791E/Notes/PerspectiveProjection.pdf
https://en.wikipedia.org/wiki/3D_computer_graphics
https://en.wikipedia.org/wiki/3D_rendering
https://www.youtube.com/results?search_query=perspective+projection+in+computer+graphics (just in case anyone prefers to learn visually, there are plenty of videos about this)

TL;DR, the X and Y axes are treated the same way. All of this is actually good news for me, as not only does it answer the question as to why the image behaves as it does when the camera is rotated around the Z axis (specifically, is the X axis fixed to the camera or the world horizon - answer: it doesn't matter), but it also negates any concern of unequal stretching between the X and Y axes. The illusory effect of forced perspective still comes into play here, but we are now down to a simple factor to consider - perceived visual angle, which is determined not only by the FOV of our image but, as Skwuruhl has touched on, focal length... or in the case of a computer monitor, we're talking about the size of, and distance from, the screen. Valve touch upon this here: https://developer.valvesoftware.com/wiki/Field_of_View#Optimization and I have briefly touched on the effect of angle of view, perceived visual angle, etc. in the posts above. Essentially, what all of this implies is that in addition to considering the ratios between two zoom levels, aka fields of view on screen, we should consider that those zoom levels are in fact ratios of another field of view - that of the screen itself.
There is an easy way to demonstrate this, so I'll cough up drimzi's image again: I don't know what FOV this image is (Lil help @Drimzi?) but suffice to say, it's big. The wall at the end looks tiny, and the signs on the left wall look huge. Right? Well... open this image fullscreen, and while you stay looking at the wall at the back, get your face right up against the screen. I mean, if your nose touches the screen, you're doing it right. (OK, maybe not quite that far, but as close as possible.) Notice the green sign doesn't look so distorted any more? That's because of the angle at which we're viewing it.

I imagine the formula which takes this into account would essentially be a 'screen-to-zoom-ratio to screen-to-zoom-ratio, ratio'. And if that name didn't make you giggle you're taking this all far too seriously. Basically what I have in mind is:

1. Take screen dimensions (pixels) and size (inches/cm diagonal)
2. Take distance from screen to eyes
3. Do basic trig to find the FOV of the screen
4. Use the existing zoom ratio formula ScreenZoomRatioA = tan(FOVa/2)/tan(ScreenFOV/2) to find the ratio between the screen and the first FOV (say, hipfire)
5. Use the same formula ScreenZoomRatioB = tan(FOVb/2)/tan(ScreenFOV/2) to find the ratio between the screen and the second FOV (say, 4x zoom)
6. Then simply ScreenZoomRatioA/ScreenZoomRatioB = PerceivedZoomRatio

I'm throwing these ideas up as I type, so there are likely issues and I'm sure that can be simplified... but it's getting late so I gotta wrap it up for now. I seem to remember @DNAMTE or @potato psoas were already working on a formula which took distance from screen into account. I don't want to reinvent the wheel or steal anyone's thunder here. If you guys have something going on this I'd love to hear about it. Likewise @Skwuruhl, I know you speak this stuff like a second language, so if you can whip something up that would probably save me a lot of time.
I have a feeling that an 'all day job' for me is a '5 minute break' thing for you hehe. Otherwise I might go ahead and cobble something together sometime in the next couple of days.
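The proposed steps can be sketched in Python (names are mine). One thing worth noting: written exactly as in steps 4-6, tan(ScreenFOV/2) cancels, so PerceivedZoomRatio reduces algebraically to the plain zoom ratio tan(FOVa/2)/tan(FOVb/2) - which may be one of the 'likely issues' flagged above.

```python
import math

def screen_zoom_ratio(fov_deg, screen_fov_deg):
    # Ratio between an in-game FOV and the FOV the screen itself
    # subtends at the eye, via the existing zoom-ratio formula.
    return math.tan(math.radians(fov_deg) / 2) / math.tan(math.radians(screen_fov_deg) / 2)

def perceived_zoom_ratio(fov_a, fov_b, screen_fov):
    # Step 6 of the sketch above: ScreenZoomRatioA / ScreenZoomRatioB.
    return screen_zoom_ratio(fov_a, screen_fov) / screen_zoom_ratio(fov_b, screen_fov)

# 80 deg hipfire vs 14.8 deg 4x zoom, screen FOV ~54.7 deg (from the earlier post)
print(round(perceived_zoom_ratio(80, 14.8, 54.7), 4))
```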
Perceived sensitivity
Xander_99-40 reacted to CaptaPraelium for a topic
Crickets... Oh well, I'll go deeper. That should make things much worse. XD

First, more food for thought. Do the lines appear to change in length? Some will say it looks like they do, some will say they don't.

Last post, I wrapped up by talking about the amount of distortion at our two FOVs... In other threads, up until this day, there has been debate about which FOV should be the basis for the formula - vertical or horizontal. Over in the https://www.mouse-sensitivity.com/forum/topic/720-viewspeed-reworked/ thread, it was pointed out that the projection has the effect of using a set VFOV and then 'filling out' the horizontal plane to fill our monitor. This could imply that we should be using the VFOV, since HFOV varies with monitor aspect ratio. This makes sense, seeing as movement from one point to another that is within the VFOV limits, even in the horizontal plane of the world, should require the same amount of movement of the mouse, regardless of how much peripheral vision extends to the sides of the monitor - in other words, given the same VFOV setting, aiming 1cm to the left should feel the same no matter if we have a 16:9 or a 21:9 or a 32:9 monitor...

But that assumes a positive aspect ratio (monitor wider than tall). What if we consider the centre monitor in one of those cool triple-vertical-monitor setups? Better yet, what if we are in an aircraft/spacecraft, and roll it to the side? Our monitor's vertical FOV is now in the horizontal plane of the game world, and vice versa, and the amount of distortion in those planes does not vary. Only the visible amount of distortion in the projection to our monitor varies. So, we can just use the vertical FOV again, since it is always the angle which provides the constant amount of distortion no matter the aspect of our screen. However, if we ignore the distortion in the horizontal aspect of the monitor, we are ignoring the very effects upon our perception which render 0% MM ineffective.
Consider the aforementioned effect of the Odessa Steps. This same effect would be just as apparent if we turn our head to the sides. Then again, if we use the horizontal distortion as a basis, we are now including distortion which is not always visible, and in the most common case of a positive-aspect-ratio monitor positioned horizontally (read: not a vertical screen or a rolled-sideways spacecraft), this distortion is usually not visible.

Indeed, it turns out that this is different for different people. Current studies suggest that the difference is caused by... wait for it... eye pigmentation. Yep, the colour of your eyes makes a difference here. Does this mean we need some kind of coefficient to allow for blue vs brown vs green eyes? Well, fortunately not. See, the different eye pigmentation does appear to affect our perception of the distortion (see https://en.wikipedia.org/wiki/Müller-Lyer_illusion for more info), and other effects have been cited as the reason for the illusion, but all agree that the perception is uniform in manner. This effect does not vary between vertical and horizontal or any diagonal in between. This can be seen in the cool animated image at the top of this post: whether you see growing or shrinking lines, you'll see the same on all of them.

Why is this important? Because it tells us that we do not need to account for the differences in individual perception of the distortion. We need only account for the differences between the distortion in each axis. So, once again, analysis of the sciences tells us that the path to an answer lies in a ratio between the distortion in each FOV. But to what extent do we measure that FOV? The monitor? A square? The vertical square or the horizontal? Maybe a circle?... And how do we measure the distortion, since it's not the same all the way to the edges but increases as we diverge from the centre? But that's a topic for another post on another day. I have badguys to pewpew.