
You should use $dll(nvtemp.dll,1,0,N), where N is the Core Slowdown Threshold value of your card as reported in your NVIDIA Control Panel.
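For example, if your control panel reports a Core Slowdown Threshold of 110 °C (that number is just an illustration; substitute whatever your own card reports), the call would look like:

```
$dll(nvtemp.dll,1,0,110)
```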
This should fix the problem.
Happy temp monitoring!
Yesterday, I took the code from caesar's plugin and modified it to try to work out how to detect SLI setups, but I just cannot get any information. I tried passing card ID numbers ranging from the negative to the positive limits of the API parameters, with no joy. Clearly "monitor number" does not equal card number. (I run SLI with a single TFT.)

caesar wrote:
It seems that there is no documentation on the nv.cpl API calls that get the temperature for SLI cards. The first parameter used in the plugin only selects the active monitor, not the card, which is why JJ gets the same temperature when he uses 0 and 1 for the parameter.
A big sorry, guys, but until NVIDIA publishes documentation for SLI systems I can't do anything in the plugin to get temps from both cards...
Jumpin' Jon wrote:
Using the latest version of EVEREST, I've been able to get all the GPU temps (and so much more)... See here

Yeah, but if you want to use the EVEREST plugin you need to have EVEREST running, which isn't so convenient for me...