Everything posted by Vols and Jezuz

  1. You don't convert from CS:GO to PUBG with the calculator for any of the steps after you have the PUBG Hipfire sensitivity. You just select the first game as PUBG (Config File), pick Scoping as the Aim, and then adjust the sensitivity until you hit the Scoping target 360° rotation. When I do this with your CS:GO sensitivity and zoom ratio, I get 0.010890 for Scoping. For people using a CS:GO zoom sensitivity ratio of 1, all the values will end up being the same except for Scope 2x (barely), 8x, and 15x. I'll just do all the values for you, because it doesn't take long now that I have a method down.

Normal: 0.010890
VehicleDriver: 0.010890
Targeting: 0.010890
Scoping: 0.010890
Scope2X: 0.010891
Scope4X: 0.010890
Scope8X: 0.012017
Scope15X: 0.013613

Edit: calculator updated.
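A quick sanity check of those last two numbers as a small C++ sketch; it just applies the nominal-vs-actual magnification correction from the posts below, assuming the 8x scope is really 7.25x and the 15x is really 12x:

```cpp
#include <cstdio>

int main() {
    // With a zoom sensitivity ratio of 1, the 8x/15x values would also equal
    // Scoping if the scopes were true 8x/15x; the correction scales each one
    // by nominal / actual magnification.
    double scoping = 0.010890;
    double scope8  = scoping * 8.0  / 7.25;   // ~0.012017
    double scope15 = scoping * 15.0 / 12.0;   // ~0.013613
    std::printf("Scope8X:  %.6f\nScope15X: %.6f\n", scope8, scope15);
    return 0;
}
```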
  2. Yeah, it makes sense to wait. Here is my method for converting CS:GO sensitivity to PUBG, and it should work for other Source games. Where it gets interesting is that I preserve the behavior of the scoped mouse sensitivity from CS:GO and its zoom_sensitivity_ratio_mouse. While Valve's method for handling scoped sensitivity is far from perfect, as has been discussed ad nauseam on this forum and elsewhere, it is the behavior that feels most natural to me now after years of playing CS:GO and TF2. After refining this method and trying it out extensively in-game, I'm extremely satisfied with how familiar and seamless the aim feels across all scopes/FOVs. All the other methods I've tried that use Viewspeed or various Monitor Distances just did not feel right across all scopes/FOVs.

Note: this method was made for people who primarily stay in third person except to engage in gunfights. It will need a little tweaking once first-person-only servers come out. Also note that I've included corrected magnifications for the 8x and 15x scopes, as discussed a few posts above. Edit: these steps are now obsolete, check this post for updated instructions.

1) Convert your CS:GO 360° rotation to PUBG Hipfire using the calculator. For me, 1 sensitivity @ 900 DPI = 46.1818cm per 360°, which gives 0.009900 for PUBG Hipfire. The reason I've used PUBG's third person (Hipfire) for the 360° rotation conversion is that it is the mode you will almost always be in when you need to do large flicks. If you hear gunfire coming from 100m directly behind you, for instance, almost everyone will turn the 180° in third person, because the camera angle and increased FOV help you locate where the gunfire is coming from. Then once you have located the enemy, you switch to ADS or scope in to engage them, for the increased clarity from the zoomed-in FOV and the lower spread/deviation. So the muscle memory you have built in CS:GO for snapping to precise angles aligns best with PUBG's third person (Hipfire).

2) Hipfire is named Normal in the config file. For VehicleDriver and Targeting in the config file, use the same sensitivity that you calculated in step 1 for Hipfire, since they all use the same FOV.

3) Go to this Google spreadsheet and then "File" > "Download as" so you have a copy you can edit. In the green boxes, edit the red text to enter your personal CS:GO cm per 360° from the calculator and your zoom_sensitivity_ratio_mouse (my values are just there as an example).

4) To make zoomed-FOV sensitivity behave like CS:GO and other Source games with a zoom sensitivity ratio, the target 360° rotation for PUBG Scoping is calculated in cm by starting with the Hipfire 360° rotation, multiplying it by the FOV magnification, then dividing by the zoom sensitivity ratio. This way, each zoomed FOV's sensitivity changes with the FOV in exactly the same way as the various zoom levels in CS:GO. Sniper rifles' first zoom, AWP second zoom, SSG 08/G3SG1/SCAR-20 second zoom, and AUG/SG 553 zoom are all different magnifications and FOVs in CS:GO, but they are all controlled by the same zoom_sensitivity_ratio_mouse value.

5) EDIT - For these last two steps, only pick PUBG as the first game in the calculator; don't pick CS:GO as the first game and PUBG as the second game to convert. To get your PUBG Scoping sensitivity, pick Scoping under Aim in the calculator, enter your DPI, and manually alter the sensitivity value until the green 360° rotation value in the CALCULATIONS box is as close to the spreadsheet's Scoping target 360° rotation as possible. Keep in mind that the game only uses six decimal places (0.xxxxxx) for config values, so don't waste your time getting more exact than that. This might sound like it would take forever, but it goes quickly once you get a method down.

6) Scope 2X/4X/8X/15X target 360° rotations are all calculated similarly to Scoping in step 4, but relative to the Scoping 360° rotation. So just repeat step 5 for the remaining sensitivities, matching the calculator's green 360° rotation values to the spreadsheet's target 360° rotation values. Make sure to change the calculator's Aim selection for each scope level, since the same sensitivity value gives a different 360° rotation for each of the Aim choices. (A small sketch of the arithmetic behind steps 4 and 6 is at the end of this post.)

When you are all done, you should end up with a list of values similar to mine below. You will need to look up how to edit PUBG's config file if you are not familiar with it, and you will probably have to make it read-only or the game will revert your values at some point. I suggest making a backup of the config file in case this ever happens.

Normal: 0.009900
VehicleDriver: 0.009900
Targeting: 0.009900
Scoping: 0.008749
Scope2X: 0.007732
Scope4X: 0.007732
Scope8X: 0.008532
Scope15X: 0.009665
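The spreadsheet linked in step 3 just automates the arithmetic from steps 4 and 6. Here is a rough C++ sketch of that structure only; the ADS and per-scope magnification figures below are placeholders, not the real numbers, so take those from the spreadsheet/calculator (including the corrected 7.25x and 12x):

```cpp
#include <cstdio>

int main() {
    // Inputs from steps 1 and 3 (mine, as an example):
    double hipfireCm = 46.1818;   // cm per 360 for Hipfire/Normal
    double zoomRatio = 1.0;       // your CS:GO zoom_sensitivity_ratio_mouse

    // Step 4: Scoping target = Hipfire rotation * FOV magnification / zoom ratio.
    double adsMag    = 1.25;      // PLACEHOLDER - use the real ADS magnification from the spreadsheet
    double scopingCm = hipfireCm * adsMag / zoomRatio;
    std::printf("Scoping target: %.4f cm/360\n", scopingCm);

    // Step 6: the scope targets are built the same way, but relative to Scoping.
    // PLACEHOLDER magnifications; the spreadsheet uses the corrected 7.25x/12x values.
    double scopeMag[] = { 1.8, 4.0, 7.25, 12.0 };
    for (double m : scopeMag)
        std::printf("scope target (mag %.2f): %.4f cm/360\n", m, scopingCm * m / zoomRatio);
    return 0;
}
```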
  3. I used my strategy from the previous post and was able to snag an 8x scope to take some screenshots and mouse data. I derped a little bit and did it versus third person instead of first person, but I suppose it's still fine. Here are the images for comparison if anyone wants to do any calculations with them. I tried, but got too confused by the trigonometry to get much out of them: third person, 8x scoped in, 8x scoped in overlaid on third person.

I also took MouseTester data to get the total x-counts in a 360° rotation. Here are the images of the points I used to start/stop on while collecting MouseTester data: third person, 8x scoped in.

For third person, it was a total of 16362 x-counts, which converts to 46.177cm per 360° at my 900 DPI. This is very close to the 46.1818cm per 360° from the calculator for my 0.009900 hipfire sensitivity. For 8x scoped in, it was a total of 173621 x-counts, which converts to 489.997cm per 360° at my 900 DPI. This is not close to the 540.6461cm per 360° from the calculator for my 0.007732 Scope 8X sensitivity. However, if the calculator treated the Scope 8X sensitivity as having 7.25x magnification, then the calculator would give 489.9608cm per 360°, which is very close to my measured 489.997cm per 360°.

So I think we can conclusively state that the 8x scope is indeed 7.25x magnification as reported on PUBG.ME, and we can furthermore anticipate that the 15x scope will be 12x magnification. To demonstrate the difference this makes, the 0.007732 sensitivity I was using for Scope 8X should actually be 0.008532 (= 0.007732 * 8 / 7.25) to take the 7.25x magnification into consideration. I was also using 0.007732 for Scope 15X, which should actually be 0.009665 (= 0.007732 * 15 / 12) to take the 12x magnification into consideration. The various sensitivities I used for PUBG come from my own method, which I will make a post for soon, but the bottom line is that this affects everyone who used the calculator for deriving Scope 8X and Scope 15X sensitivities.

Oh, and somehow I managed to win that game after I collected these screenshots and mouse data, even though I was just trying to find an 8x or 15x scope for this test. Just goes to show, man's best laid plans in PUBG...
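For anyone who wants to redo the unit conversion, the arithmetic as a small C++ sketch (counts to cm per 360°, plus the magnification implied by the measured vs. predicted rotation), using the numbers from this post:

```cpp
#include <cstdio>

int main() {
    double dpi = 900.0;

    // counts / DPI = inches, * 2.54 = cm
    double hipfireCm = 16362.0 / dpi * 2.54;          // ~46.177 cm per 360
    double scopedCm  = 173621.0 / dpi * 2.54;         // ~489.997 cm per 360
    std::printf("third person: %.3f cm/360\n8x scoped:    %.3f cm/360\n", hipfireCm, scopedCm);

    // The calculator predicted 540.6461 cm assuming a true 8x magnification.
    // cm per 360 scales linearly with magnification, so the implied value is:
    double impliedMag = 8.0 * scopedCm / 540.6461;    // ~7.25
    std::printf("implied 8x-scope magnification: %.3f\n", impliedMag);
    return 0;
}
```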
  4. It's just a technicality, practically speaking. As long as the calculator has the correct FOVs and the correct sensitivity/360° rotation ratio for each magnification level, the actual magnification levels one might calculate are inconsequential, unless I'm missing something. Though of course it would be nice if the official in-game magnification labels and the actual, correct magnifications matched. I've seen CS:GO and other Source games just use the FOV ratio instead of the tan(FOV/2) ratio, and I think Arma as well (there are other games, but I can't remember them). So they might not even realize they are using an antiquated method for calculating magnification.
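To illustrate the two conventions side by side (the FOVs below are made-up examples, not PUBG's or CS:GO's actual values):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    double baseFov = 80.0, zoomFov = 44.0;   // illustrative horizontal FOVs in degrees

    // Magnification as the ratio of the tangents of the half-FOVs,
    // i.e. how much larger an object near the screen center appears.
    double tanMag = std::tan(baseFov * PI / 360.0) / std::tan(zoomFov * PI / 360.0);

    // The older shortcut some games use: a straight ratio of the FOV angles.
    double linearMag = baseFov / zoomFov;

    std::printf("tan(FOV/2) ratio: %.3f\n", tanMag);
    std::printf("linear FOV ratio: %.3f\n", linearMag);
    return 0;
}
```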
  5. If I find an 8x or 15x scope while I'm playing, can I just take screenshots of being scoped in versus first-person unscoped, without changing my aim, as a quick and dirty way to investigate the magnifications by counting pixels of an object?
  6. If you fail to find a custom server, the quickest way to find a 15x scope would be to parachute to a vehicle and wait in a safe spot near the middle of the map, listening for the planes. When you hear one, stalk it until it drops the package and try to move in and grab the loot. Obviously this is high risk, but you can probably get lucky and nick a 15x in 2-3 games. Then drive to the middle of nowhere to do your analysis.

Another peculiarity I noticed with the calculator is that the 4x/8x/15x scopes work out to 4.00x/8.00x/15.00x magnifications if you use the 360° rotation and Actual HFOV compared to Scoping. But for the 2x scope, the 360° rotation gives a 1.80x magnification while the Actual HFOV gives a ~1.82x magnification. Is there a minor miscalculation happening with the 2x scope values, or is this a quirk that originates in the game code, meaning your calculator values are all accurate for the 2x scope?
  7. @DPI Wizard Have you seen PUBG.ME? It lists the 2x scope as having 1.8x magnification and the 4x scope as having 4x magnification, both of which agree with your calculator. However, it lists the 8x scope as having 7.25x magnification and the 15x scope as having 12x magnification. Supposedly those values were datamined from the game values, so I'm wondering if you have verified in-game that the 8x/15x scopes indeed have matching 8x/15x magnification, as used in your calculator's formulas? I'm guessing that either your calculator is more up-to-date and the values have changed since PUBG.ME datamined them, or the values were changed after you set up the 8x/15x scope calculations and PUBG.ME ascertained the updated values through their datamining. Also I've come up with what I believe is a superior method for transforming CS:GO's (and other Source games) sensitivity and zoom sensitivity behavior to PUBG's sensitivities. I want to wait on sharing it until the 8x/15x scope magnifications have been confirmed.
  8. No offense, but your experience with the old RInput/sourceGL isn't really relevant anymore, because the new ones are considerably better at accurately handling raw mouse data, with ~0 packet loss. Now, if the new RInput/sourceGL controlled noticeably worse than m_rawinput 1 in L4D2, that would be something that might concern me more.
  9. I've had the exact same thought before. Calling SetCursorPos with raw input seems like such a bizarre work-around as opposed to simply calling ClipCursor. Maybe ClipCursor has higher overhead or some other strange side effect...? Random thought, but we could make an injectable DLL that intercepts and discards the game's SetCursorPos calls and calls ClipCursor instead when m_rawinput is 1 (rough sketch of that idea below). Or, even more complicated, we could make RInput detect the m_rawinput setting: work normally if it's 0, and if it's 1, not register itself for raw input or detour/trampoline Get/SetCursorPos, and instead use the ClipCursor change I just mentioned. Not sure if either of these ideas is entirely possible or worth pursuing, though.
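Purely hypothetical sketch of that first idea in C++ (nothing like this exists yet; window lookup, hook installation, and unclipping on alt-tab are all left out):

```cpp
#include <windows.h>

static HWND g_gameWnd = nullptr;   // assumed to be located elsewhere by the injected DLL
static bool g_clipped = false;

// Imagined replacement for the game's SetCursorPos import: swallow the
// per-frame recentering and confine the OS cursor once with ClipCursor,
// leaving the raw input stream (m_rawinput 1) untouched.
BOOL WINAPI HookedSetCursorPos(int /*x*/, int /*y*/)
{
    if (!g_clipped && g_gameWnd)
    {
        RECT rc;
        GetClientRect(g_gameWnd, &rc);
        // ClipCursor expects screen coordinates.
        MapWindowPoints(g_gameWnd, nullptr, reinterpret_cast<POINT*>(&rc), 2);
        g_clipped = (ClipCursor(&rc) != FALSE);
    }
    return TRUE;   // report success so the game carries on as usual
}
```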
  10. I'm assuming this is with the newly released RInput v1.35/sourceGL v2.06? Like I said, using RInput with or without sourceGL would be the exact same in-game, so if there is some problem in L4D2, it's related to RInput and not sourceGL per se. Have you used RInput with CS:GO or any other game? If so, is it just L4D2 that doesn't feel right with it?
  11. RInput involves extra calls to the Win32 API functions GetCursorPos and SetCursorPos to feed the game the raw input data that m_rawinput 1 does not. The 'old' v1.31 RInput suffered packet loss because it would read the raw mouse data when GetCursorPos was called, but it would wait until SetCursorPos was called to reset the raw x and y accumulators, meaning any raw data collected during polls in between those two calls was ignored. The way the Source engine is coded with m_rawinput 0 to translate cursor position changes into in-game aim, the two cursor functions would ideally occur sequentially without interruption. The rate of dropped mouse data packets was roughly proportional to fps_max, because the more frames that were rendered, the more work the CPU was doing, and thus the higher the occurrence of something requiring CPU wall time in between Get/SetCursorPos. The new RInput fixed the packet drops by instead resetting the raw x and y accumulators during the GetCursorPos call (along with another more subtle and difficult-to-explain change in how it accumulates the raw input data). Still, RInput requires the two extra Win32 API calls, which are vulnerable to high CPU utilization and to being delayed momentarily by higher-prioritized processes. (A simplified sketch of the accumulator difference is below.)

Now, I really am not knowledgeable enough about the Windows API, Source engine frame rendering, how the streaming program(s) work, or CPU prioritization to know exactly what could be going on if what you're saying is true, but I imagine it's one of these possibilities:

1) Streaming is causing frequent ~100% utilization of the CPU core that is also handling Get/SetCursorPos, so those functions are subject to waiting for CPU wall time, which could either delay the frame rendering they're being called for, or cause that cursor change to be delayed to the next frame (thus inducing 1 frame of input lag).

2) Your streaming implementation isn't causing ~100% core utilization, but happens to be particularly taxing on the Windows API, causing some kind of collision with the Get/SetCursorPos timing.

3) The streaming processes have higher priority than the game process, so calls such as Get/SetCursorPos are ceding time to the streaming processes during high CPU utilization (this possibility could be easily fixed by setting the streaming processes to lower priority than the game with Prio).

Whatever the case, you could invest ~$20 in a Teensy and easily program it to perform exact patterns of mouse input to demonstrate with empirical evidence what you say you are experiencing.
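A heavily simplified illustration of that accumulator difference (this is not the actual RInput source, just the idea; the WM_INPUT handler that fills the accumulators is omitted):

```cpp
#include <windows.h>

static POINT g_center = { 0, 0 };                  // where the game last parked the cursor
static volatile LONG g_accumX = 0, g_accumY = 0;   // written by the DLL's WM_INPUT handler

BOOL WINAPI HookedGetCursorPos(LPPOINT pt)
{
    // v1.35-style: consume the accumulators atomically right here, so raw
    // packets that arrive before the game's matching SetCursorPos call are
    // credited to the next read instead of being thrown away.
    pt->x = g_center.x + InterlockedExchange(&g_accumX, 0);
    pt->y = g_center.y + InterlockedExchange(&g_accumY, 0);
    return TRUE;
}

BOOL WINAPI HookedSetCursorPos(int x, int y)
{
    // Just remember where the game thinks the cursor is; don't touch the
    // accumulators. (v1.31 reset them here instead, which is why anything
    // polled in the Get/SetCursorPos gap was lost.)
    g_center.x = x;
    g_center.y = y;
    return TRUE;
}
```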
  12. sourceGL injects the RInput dll into the game exe just like RInput.exe. Therefore sourceGL having a higher priority can only hurt things, if it were to take CPU time away from the game, but this shouldn't really happen because like I said, sourceGL is basically not doing anything after game startup. And with the way m_rawinput 1 and RInput work, mouse packets shouldn't be lost if the video encoder is taking some CPU time away from the game. Instead, it would just cause fewer FPS, so the mouse packets could have slightly more input delay just from less FPS.
  13. sourceGL is basically idling once a game is launched, so it's probably best to have it at Normal or Below Normal priority. Using sourceGL for RInput versus just using RInput.exe will have no difference in packet loss, sourceGL simply automates the process.
  14. Teensy tests show the new RInput has <0.001% packet loss across all FPS. sourceGL has been updated with the new RInput: http://sourcegl.sourceforge.net/ Here is the new RInput dll on github: https://github.com/VolsandJezuz/Rinput-Library/releases/tag/v1.35 This is unnecessary imo
  15. I had some interesting findings with how the average absolute** packet discrepancy of m_rawinput 0 and RInput varies with the fps_max value. I'm not sure exactly what causes the variance in m_rawinput 0, but with RInput it involves DLL calls to Windows functions, so if the input data read occurs in the tiny gap of time between GetCursorPos and SetCursorPos, that packet will be lost. So the more frames that are being rendered, the more often this occurs. That's why I'm now using fps_max 375 with RInput via sourceGL: the framerate will almost always stay over 300 (below which I can detect a difference in aim feel/appearance), while the average packet discrepancy stays lower than it would with higher fps caps. qsxcv has mentioned that it would be possible to update the RInput code to fix this.

**Previously I had been erroneously calculating all discrepancies as loss, because my script was taking the absolute value of the discrepancy in angle (I had noticed that quite often it will be something like -0.000014, due to minor rounding errors and such). After looking through the data, I noticed that m_rawinput 0 would occasionally have a positive packet discrepancy, almost like it was experiencing random positive acceleration. So I've shown histograms of the packet loss % occurrence. You can see that these positive discrepancies do not occur with RInput. Note that m_rawinput 1 has 0.000% loss invariably in all my tests, so I have not included its data since it would always be the same. Also, these tests are for 1,000,000 data packets, i.e. 1000 separate 1000 ms collections at a 1 ms packet interval (1000 Hz). Final note: with fps_max 999 I get ~600-800 fps.

m_rawinput 0 histograms:
fps_max 60 http://i.imgur.com/sP1bdus.png
fps_max 120 http://i.imgur.com/ucsQqCG.png
fps_max 300 http://i.imgur.com/g5OaihW.png
fps_max 375 http://i.imgur.com/fEXW70F.png
fps_max 500 http://i.imgur.com/90theYv.png
fps_max 999 http://i.imgur.com/zuSZ4cS.png

RInput 1.31 histograms:
fps_max 60 http://i.imgur.com/1xRq7Wm.png
fps_max 120 http://i.imgur.com/yO7ed60.png
fps_max 300 http://i.imgur.com/76EAgsP.png
fps_max 375 http://i.imgur.com/gdd9hZS.png
fps_max 500 http://i.imgur.com/yUbSlBB.png
fps_max 999 http://i.imgur.com/VCMH0cD.png
  16. VSync off. I don't think very many, if any, serious CS:GO players will use VSync, so I didn't collect any data for VSync on. The added input lag is simply not worth whatever improvement there may be in packet discrepancy. My intuition is that 100k packets at sensitivity 0.0163636364 would give the same results with the Teensy, but I'll have to try that another time, as I am all data'ed out. I've read that the Source engine doesn't deal well with sensitivities < 1, so perhaps that's why your results varied, but I'm not sure I believe that. It could also just be because of the inadequacies of attempting to emulate USB polling with a script. I tried giving the script I previously mentioned a CPU priority everywhere from Low to Realtime, and it still gave much larger packet loss percentages than when using actual USB polling with the Teensy.
  17. Intro

Hope you don't mind me stealing your formatting, but my goal was to simulate 500/1000 Hz mouse data with actual 500/1000 Hz USB polling using a Teensy 2.0.

Test system

CPU: Intel Core i7-3770K
Memory: 16 GB
GPU: GeForce GTX 680 | Driver: 347.09 Beta
OS: Windows 7 Ultimate SP1 x64
Game: Counter-Strike: Global Offensive (Steam version), exe version 1.35.0.2 (csgo)
Map: Dust II

Testing procedure

The example USB Mouse code from the official Teensy site was modified to send 1 s of continuous input data (500 packets of 20 x-counts for 500 Hz, or 1000 packets of 10 x-counts for 1000 Hz), send a mouse button click, collect the data, calculate the packet discrepancy, then reset and restart the loop. At sensitivity 1.63636364, these 1 s intervals of input data give a 360° rotation in-game. Thus, by using "setang 0 0 0" at the start of each iteration, then "clear; getpos; condump" as triggered by the mouse button click after each 1 s interval of input data, the discrepancy in the view angle as collected from the console dump text file can be calculated, and the results averaged over all intervals in the loop. (A reconstruction of the test firmware is sketched below.)

Results

The presented data is for 100 x 1 s data collection intervals, which is 50,000 mouse data packets for 500 Hz and 100,000 mouse data packets for 1000 Hz.

Conclusion

As I suspected, the mouse data packet loss as previously reported was greatly exaggerated by the script's shortcomings in accurately modeling USB polling data. I believe the Teensy 2.0, which, unlike the script, not only sends input data with about the same precision as a 1000 Hz USB mouse (1 ms update interval) but also actually sends the data via USB polling so that its CPU prioritization is accurately reflected, is a far superior method for measuring mouse data packet loss. While m_rawinput 1 undoubtedly still provides the most accurate translation of mouse data, with practically no packet loss, RInput and m_rawinput 0 only have ~0.2-0.3% packet loss. This is about 10x less than was measured using the script simulation.

While RInput may have slight packet loss, some users perceive a very noticeable improvement in input lag and/or the difficult-to-describe overall 'mouse feel'. I suspect that the perceived difference between RInput and m_rawinput 1 is very dependent on the individual's unique peripherals/hardware/firmware/drivers/software/OS/game configuration. If packet loss were ~1-3% as previously described, it would be hard to justify this tradeoff. But considering this testing's finding that it is more on the scale of 0.2-0.3%, I believe there can be enough improvement for some users in perceived input lag and/or mouse feel to be worth the small amount of packet loss associated with RInput.

In short: the actual mouse packet discrepancy for RInput, as measured by actual 500/1000 Hz USB polling, is about an order of magnitude less than was previously presented. There is indeed still a difference from m_rawinput 1, which has no packet loss, but it's generally around 0.2-0.3%, which for some users may be worth the tradeoff for less input lag and/or better mouse feel compared to m_rawinput 1.
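For reference, a hedged reconstruction of the 1000 Hz firmware loop. The original was a modified copy of the official Teensy USB Mouse example; this version uses the Teensyduino Mouse API and illustrative timing, so treat it as a sketch rather than the exact code used:

```cpp
// Teensy 2.0, Teensyduino, USB Type set to a profile that includes "Mouse".
// 1000 packets of 10 x-counts at ~1 ms spacing = 10000 counts, which at
// sensitivity 1.63636364 and the default m_yaw 0.022 is exactly 360 degrees
// (10000 * 0.022 * 1.63636364 = 360). The click at the end is bound in-game
// to "clear; getpos; condump", and each iteration starts from "setang 0 0 0".

void setup() {
  delay(5000);                      // time to focus the game window
}

void loop() {
  for (int i = 0; i < 1000; i++) {  // 1000 Hz worth of packets
    Mouse.move(10, 0);              // 10 x-counts per packet
    delayMicroseconds(1000);        // ~1 ms between packets
  }
  Mouse.click();                    // trigger the console dump bind
  delay(3000);                      // leave time to log/reset before the next pass
}
```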
  18. That's the exact type of thing I was worrying about for the Logitech script. My script actually does 1000 Hz (I was able to measure the average update interval as something like 1.0005 ms), but it still seems that something is interfering with the low-level functioning, which I want to troubleshoot. My script is an automated loop of runs of 1000 mouse data packets that turn 360° in one second, collecting the discrepancy in getpos via condump. The vast majority of these loop iterations yield a packet loss for RInput/m_rawinput 0 on the order of 0.1% or less, but there are rare instances where the packet loss for a loop iteration will be 10% or higher, which is literally 10% of the mouse input data being lost over an entire second. Now, I know this doesn't actually occur with a real mouse, because that kind of catastrophic degradation of input data would be so distinct that it would almost be like a severe lag spike or a sensor malfunction.

I have updated my script, but not run it yet, to also collect statistics like the maximum packet loss per loop iteration and the sample standard deviation, to help tell how often these catastrophic events are occurring and to help analyze how well the script is emulating 1000 Hz USB polling. (A sketch of that post-processing is below.) I may even spend some time doing detective work on what exactly is interfering with m_rawinput 0/RInput when we are running our scripts. Worst comes to worst, I will just collect some elementary data to at least show my findings with (what I feel) is an improvement to the Logitech script, but unless I am able to find an explanation or work-around for whatever is causing that unrealistic data for m_rawinput 0/RInput, I don't think I will be satisfied with anything less than the Teensy experiment. Unfortunately, I am going out of town for the weekend, so it is unlikely I will make any progress or be able to collect/post data until next week.

I had two other such camera tests people had done bookmarked (one of which I was talking about in my previous post) that I felt were reasonably close to being as experimentally sound as yours. When I have more time, I will do some deep googling and re-reading of relevant threads to try to remember how exactly I stumbled upon or was directed to them.
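The post-processing itself is just this (an assumed C++ sketch of what the updated script will compute; the yaw values are placeholders, not real measurements):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Final yaw from getpos/condump for each 1 s iteration; each one should be
    // exactly 360 if no packets were lost. These values are placeholders only.
    std::vector<double> yaw = { 359.964, 360.000, 359.892, 324.000 };

    double sum = 0.0, sumSq = 0.0, worst = 0.0;
    for (double y : yaw) {
        double lossPct = (360.0 - y) / 360.0 * 100.0;   // signed: negative means extra counts
        sum += lossPct;
        sumSq += lossPct * lossPct;
        if (lossPct > worst) worst = lossPct;
    }
    double n = static_cast<double>(yaw.size());
    double mean  = sum / n;
    double stdev = std::sqrt((sumSq - n * mean * mean) / (n - 1.0));  // sample standard deviation
    std::printf("mean loss %.4f%%, max loss %.4f%%, sample stdev %.4f%%\n", mean, worst, stdev);
    return 0;
}
```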
  19. I'm not just basing that on your finding; I've also seen a high-speed camera, full-chain measurement of a consistent 1-2ms of input lag for m_rawinput 1 versus m_rawinput 0. I had it bookmarked along with a wealth of other good source materials off which to base my future experiments and measurements, but unfortunately they were lost along with several other things when my Windows install ate itself a few months ago and I had to revert to a full system image from April. I'm trying to find it, but I'm fairly certain it was from one of those obscure Japanese sites. Regardless, I test out and swap between m_rawinput 0, RInput, and m_rawinput 1 on a fairly regular basis, and I'm just about as positive as I can be (on my system, at least) that there is some kind of input lag or smoothing or buffering or whatever you want to call it going on with m_rawinput 1. It just feels distinctly different to me in my vast personal experience with it. For reasons I could probably expound upon, but which are probably not worth mentioning here, I almost prefer it for AWPing, while in general I prefer RInput. Obviously an A/B/X test would be exceedingly difficult if not impossible for RInput v. m_rawinput 1 (v. m_rawinput 0?), though an A/B test would be possible for me to implement in my program, but I feel confident I would have a similar ability to differentiate between them as I did with m_rawinput 1 versus m_rawinput 0 in your CS:GO A/B/X procedure. Acquiring my Teensy was a step in the right direction for providing some better raw data on the matter. But eventually I hope to also obtain an oscilloscope and a high-speed camera capable of full-chain input lag measurement. I think only then will we be able to ferret out exactly what is going on here.
  20. I just got a Teensy, which, if I'm understanding correctly, will let me do essentially what our scripts are doing, but send the data through USB to get the full polling effect, rather than relying on scripts that mimic USB polling and CPU prioritization with timers and such.
  21. I wrote a script that performs the exact same core mouse movement inputs as the Logitech script, but I suspect it is much more efficient/accurate at mimicking 1000 Hz USB polling, for several reasons I will go into when I do a full post of my data similar to the OP. This inefficiency/inaccuracy would be exacerbated for m_rawinput 0/RInput because of the effect on DLL calls that must be very precise, and those two require 1 and 2 extra DLL calls respectively that m_rawinput 1 does not. More details on this, as well as my script, to follow in the full findings post. But as a teaser, my initial finding from 100,000 mouse data packet runs is that RInput has ~0.42% packet loss (VSync off, 1000 Hz/1 ms update rate) compared to ~0.002% for m_rawinput 1. This data would certainly make the tradeoff for m_rawinput 0/RInput (accepting some inaccuracy from packet loss in exchange for ~1.5ms less input lag relative to m_rawinput 1) much closer to being a debatable choice with no clear-cut winner.
  22. Very interesting data, thanks for your hard work. I still can't help but feel that the packet loss is something that can vary greatly depending on one's specific hardware/firmware/software/OS/game configuration. One of the biggest initial things I see is that VSync makes such a big difference, which makes me question how much other factors are affecting the data. I am eager to perform your experiment on my own computer. Is it possible you could provide the scripting file and instructions for your exact methodology using the Logitech software? Also, I'd like to say that 1.5ms of input lag added between the mouse data read and the frame render is not as insignificant to some as you stated. Some people seem particularly sensitive to this and would still find the tradeoff of slightly less accurate mouse movement from packet loss worth it for mouse movement that feels more responsive and 'connected'. And that's interesting about RInput 1.2 versus 1.31; I want to look into this more myself. Thanks again for your contributions.
  23. Link should be http://www.mouse-sensitivity.com/forum/topic/342-counter-strike-global-offensive-m-rawinput-vs-rinput/ Otherwise it's not working for me
  24. I'm guessing RInput loss will be closer to an order of magnitude lower than your original figures after you have all the data. As you've already stated that your initial 5% figure is off, I think you should consider editing the original post, because a lot of people will just read the first few posts and take it as the word of god. If a small amount of packet loss is demonstrated with RInput, I would still see it as a tradeoff with m_rawinput 1, with the former having small packet loss while the latter has added input lag. It's been pretty convincingly demonstrated that m_rawinput 1 has a flat ~1.5ms input lag penalty, which may sound insignificant, but when you've fine-tuned your aim to rely on extremely fast and precise flicks, twitches, and grandiose swipes, it can be perceptibly detrimental. I'm currently working on trying to implement some kind of A/B blind test into sourceGL so that we can at least get some practical-use user statistics for detecting/preferring one implementation of raw input over the other.

Off the top of my head, some things I would like to see tested with your methodology (which I presume you will thoroughly expound upon when you release your collected data) are the effects:

1) across varying hardware/OS/software setups (this may not be reasonable for you to do if you only have one primary system),
2) of mat_queue_mode 0 v. 1 v. 2,
3) of having CS:GO process priority set to AboveNormal and High, as well as Steam processes set to BelowNormal,
4) of having the Steam overlay disabled/enabled, and
5) of having things like minimal and optimized Windows services running and no fan controller software (static fan speeds).

Appreciate your efforts in providing us with some experimental data, and I look forward to seeing your results!