To easily shut down, restart, or sleep a Windows machine via Remote Desktop, click on the desktop and press Alt+F4. This brings up the standard Windows shutdown prompt (where you can choose shut down, restart, sleep, etc.). (I didn't know that!)
More info here.
Wednesday, November 24, 2010
Sunday, November 21, 2010
"Aero Snap" (sort of) in FVMW
I really like Windows 7's "Aero Snap" feature. I never drag windows, but I use Win+Left/Right all the time, and it really bugs me when I use another system and it doesn't do anything.
Here's how to do it in FVWM:
# Win+Left / Win+Right: snap the focused window to the left/right half of the screen
AddToFunc LeftHalf
+ I Move 0 0
+ I Maximize 50 100
+ I Raise
AddToFunc RightHalf
+ I Maximize 50 100
+ I Move 50 0
+ I Raise
# Context "A" means anywhere; modifier "4" is Mod4 (the Super/Windows key)
Key Left A 4 LeftHalf
Key Right A 4 RightHalf
FVWM will remember the old window size for you (but will follow your window placement rules when positioning it).
Thursday, November 18, 2010
VSTS data bindings and unicode
I have a test with a data source (settings.csv) and a data binding (mysetting). Settings.csv exists, and there's an entry for 'mysetting', but VSTS complains:
Error...Could not run Web test 'test' on agent 'agent': Could not access table 'settings#csv' in data source 'settingsSource' of test '12345678-abcd-abcd-abcd-12345678abcd': No value given for one or more required parameters.
Turns out it's because the csv file is Unicode. The easiest way to fix this (for me, at any rate), is Powershell:
cat settings.csv | out-file temp.csv -encoding ascii
Compare the sizes of settings.csv and temp.csv with ls. If temp.csv is smaller, then settings.csv was Unicode (UTF-16).
mv temp.csv settings.csv -force
The truly maddening thing is that VS seems to be saving out as Unicode itself, so I have to edit my csv files elsewhere.
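If you'd rather check the encoding directly than compare sizes, you can look for the byte-order mark: files saved as "Unicode" (UTF-16 LE) start with the bytes FF FE. A sketch (it creates its own sample settings.csv, so as not to assume anything about yours):

```powershell
# Save a sample file the way VS does ("Unicode" = UTF-16 LE), then inspect it
"mysetting,value" | Out-File settings.csv -Encoding Unicode
$bytes = [System.IO.File]::ReadAllBytes((Resolve-Path settings.csv))
"{0:X2} {1:X2}" -f $bytes[0], $bytes[1]   # FF FE = the UTF-16 LE byte-order mark

# Re-encode as ASCII and compare sizes, as described above
Get-Content settings.csv | Out-File temp.csv -Encoding ascii
(Get-Item temp.csv).Length -lt (Get-Item settings.csv).Length   # True: UTF-16 uses two bytes per character
```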
Fullscreen flash in linux freezes video: partial solution
Linux flash has problems with fullscreen on my laptop with an integrated intel graphics chipset. Most of the time I fullscreen it, the audio keeps playing but the video freezes.
You can solve the problem by disabling hardware acceleration, but that noticeably impacted my video quality (dropped frames, pixelation). However, it appears that a combination of changing focus to other windows and quickly fullscreening/un-fullscreening will make it work. Unfortunately, I haven't found a set of steps that works every time...usually, I just click around a lot until it works.
Sunday, November 14, 2010
Putting CG shaders where VS can find them
For a while, I was having problems running my CG tests out of visual studio. It couldn't find my shader files, so I had to run from the command line. Not only did this add some steps, it meant I couldn't debug.
Turns out, all I had to do was move the shader to where VS creates new classes by default (another level deeper in the directory than where I normally put my source code). I went ahead and moved all my code and shaders there, and now it will happily find the shaders when I run via F5.
Thursday, November 11, 2010
Vertical Split (sort of) in Microsoft Word 2010
Word (along with other Microsoft products, such as Visual Studio) only allows you to "split" a document horizontally (that is, into two sections on top of each other). Since the trend these days is towards widescreen monitors, it would really be more helpful to split vertically (that is, two sections side by side).
You actually can do this in Word (and VS), they just don't call it "split". The paradigm seems to be making "new windows", which you can then position side by side yourself (VS's inner window management does this for you).
In Word 2010, go to "View" on the Ribbon and click "New Window". If you have Windows 7, then you can use Win+Left and Win+Right (or drag the windows into the corners) to make each window take up half the screen.
A true vertical split would let you resize both views by dragging one separator, but this works well enough most of the time.
Friday, October 15, 2010
Increasing VSTS limit on old test run results
By default, Visual Studio Team System will only save 25 of your test run results. To increase this number:
- Tools -> Options -> Test Tools -> Test Execution
- Change the value after "Limit number of old Test Results to:"
Note: you still may hit a practical limit if you use the freely included SQL Express (the default); SQL Server Express caps database size (4 GB for 2008, 10 GB for 2008 R2).
Tuesday, October 5, 2010
Handy Visual Studio shortcuts
When I discover something nifty in Visual Studio, I'll post it here:
Shortcut | Name | Comment |
Ctrl + . | IntelliSense | If you've typed enough of a word that there's only one completion available, it'll finish it for you (or bring up a list of other possible completions) |
Ctrl + , | Navigate To | Search project (solution?) for member and function definitions |
For now, more goodness here
Saturday, September 18, 2010
"Show All Files" crash work-around (Visual Studio 2010)
When I upgraded to Visual Studio 2010, I hit the following problem: after converting my old project to the new format, I made the mistake of clicking on "Show All Files". Once you've done this, it seems you can't go back -- any time I tried to deselect it, VS crashed.
Here's how I fixed it:
- Open up PowerShell in your project folder
- Run
ls -rec | select-string Show
- Use your favorite text editor to set all references to "ShowAllFiles" to false
In my case, this ended up being [ProjectName].vcxproj.user, but do the Select-String search to be sure.
Tuesday, September 14, 2010
ThinkPad trackpoint scrolling on Windows
For my model (T61), at least, go here.
Download and install the "ThinkPad UltraNav Driver". Requires a restart (lame), but works like a charm afterward.
Tuesday, August 10, 2010
Unique requires Sort (in PowerShell)
Reading the documentation would have told me this, but who has time for that these days?
Apparently "unique" isn't guaranteed to work without "sort", so any time you'd do this:
do this
Apparently "unique" isn't guaranteed to work without "sort", so any time you'd do this:
$list | unique
do this
$list | sort | unique
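The underlying reason (as I understand it) is that Get-Unique, which unique presumably aliases here, only collapses adjacent duplicates. A quick demonstration:

```powershell
$list = 3,1,3
($list | Get-Unique) -join ","                 # 3,1,3 -- the 3s aren't adjacent, so both survive
($list | Sort-Object | Get-Unique) -join ","   # 1,3
```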
Saturday, May 22, 2010
Focus-follows-mouse in Windows (AutoHotKey)
This AutoHotKey code gives you focus-follows-mouse behavior: whatever window the mouse is over has "focus" (keystrokes are sent there). This lets you select a window faster than alt-tabbing through all the options. This script does not raise the window, so you can type in part of a window that is still partially covered (this comes in handy more often than you might think).
Thanks to sooyke, who posted it here.
~ScrollLock::  ; ScrollLock toggles focus-follows-mouse on and off
SLStatus := GetKeyState("ScrollLock", "T")  ; current toggle state (1 = on)
SPI_SETACTIVEWINDOWTRACKING := 0x1001
SPIF_UPDATEINIFILE := 0x1
SPIF_SENDCHANGE := 0x2
DllCall("SystemParametersInfo",UInt,SPI_SETACTIVEWINDOWTRACKING,UInt,0,UInt,SLStatus,UInt,SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)
return
There and back again: last week's brief return to linux
I put Ubuntu back on my machine recently, because the ATI drivers for my card under Win7 weren't working with my shaders. It was kind of fun, and I do remember liking the control it gave me over customization. However, to my surprise, I noticed several things:
1) Lots of fit and finish bugs (options missing or not being saved, UI elements in the wrong place, stuff like that). And these were all within the first few hours of using the product!
2) At least one completely unexpected hard crash (while scrolling a page in firefox!)
3) I *really* missed the (Shift)+Win+Left/Right/Up window management and Win+P display management from Win7
4) Something in the font smoothing or colors in X gives me a headache! (I can stare at gvim with the desert colorscheme for hours in Win7, but had trouble concentrating after a few minutes in Ubuntu)
I actually remember getting headaches in college when working on code late at night, but I always assumed it was because it was late or I was tired. Now I'm not so sure...
As surprised as I am to say it, I really prefer working in Windows now. As I said, I like the customizability of the linux world, but the quality bar isn't nearly as high as Win7 or OS X. Only the hard crash prevented me from doing my work, but that's the problem with how the linux world tends to view these problems. There's a sort of broken windows theory at play: if it's understood that "minor" bugs can be left in (because "they don't really affect functionality, just work around it! They should use the command line anyway..."), then other more severe bugs will start to creep in.
That said, I'm a stubborn person, and would've scripted my way around it, were it not for the fact that using it literally gave me a headache! Rebooting back to Win7, I was actually comforted by the Aero theme, taskbar, and other things I have limited control over. Sure, I can't customize them to my heart's content, but I can count on them. I feel at home in Windows now, and if I ever want to really get crazy with customizability, I can look into AutoHotkey.
I'm still curious about why X gave me a headache, though.
Friday, May 21, 2010
The "Uncanny valley" of automation
Automation is not the panacea I thought it was.
Implemented incorrectly, an automation system will waste more of your time than it saves. Sure, you won't have to run through the same checklists over and over, but your time will instead be spent tracking down a disproportionately large number of false positives. At first, I thought: "Hey, at least it's code! It's got to be more fun than just using the product, right?" Now I'm not so sure.
In robotics and computer graphics, there's a problem known as the "uncanny valley": when you get closer to modeling realistic human behavior, the results can be unsettling. I propose that there's a similar problem with automation: the closer you get to trying to simulate human behavior, the more problematic the results. (A bit of a stretch, and due to an entirely different set of problems, but bear with me).
Humans are good at dealing with the abstract. Take, for instance, the hard to read text you're supposed to identify when signing up for something (it's called a "CAPTCHA"). Reading these is (usually) easy for a human; we look at the blurry distorted mess and see letters. This is something we are good at. Writing a program that can read those things, on the other hand, is a difficult computer science problem.
Conversely, say you had to create dozens of accounts somewhere (perhaps as part of testing an online service). Completely filling out the name, address, interests, secret question, etc., is boring, repetitive, and prone to mistakes. These sorts of tasks are not our forte. However, it's easy to write a script that fills in all of the fields for you. (Especially with a web page, since you can interact directly with the DOM.)
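As a sketch of what the data half of such a script might look like (the field names, count, and domain are made up for illustration; the part that pokes the values into the page's DOM would go inside the loop):

```powershell
# Generate a batch of throwaway account records to feed into a signup form
$accounts = 1..5 | ForEach-Object {
    [PSCustomObject]@{
        Name           = "testuser$_"
        Email          = "testuser${_}@example.com"
        SecretQuestion = "Name of first pet?"
        SecretAnswer   = "Rex$_"
    }
}
$accounts.Count   # 5
```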
This example illustrates the balance between manual and automatic tasks. Humans are good at abstract reasoning and big picture. Computers are good at repetition and precision (note I didn't say accuracy :-P)
Human users have no problem with slight changes in design or layout (and sometimes they won't even notice); computers tend to go apeshit. This is what causes so many of the automation failures you'll be tracking down. Maybe the browser started minimized, or another window popped up in front of it and stole focus (automatic updates, anyone?). Maybe the page took a few seconds longer to load, and the expected items weren't there when the script checked for them. Maybe the designer moved a button or changed a label. Or, maybe your test itself was wrong! There may be a legitimate outcome that you, author of the test, didn't think of. (When tests are code, they can have bugs too!)
Computers shine at precisely defined problems. This suits them well for testing behind the scenes, where the input and output are not so abstract. If I send this packet, does the server give back that response? If I call this function with this value, do I get back that answer? What makes this sort of testing and verification so boring (and difficult) for humans is precisely what makes it so perfect for computers.
Another area where automation shines is tools. Running a series of installers and patches...opening up dozens of web browsers to a series of long, convoluted URLs...finding all the logs from a certain period of time, spread over dozens of computers, compressing them, copying them to a central repository, and emailing all interested parties about their availability...these are the sort of mindless, error prone tasks that waste tester time and are just begging to be automated.
The takeaway is that automated tests are most useful when they augment human testing, not try to replace it. Automation should simplify the lives of human testers -- by taking over the tasks humans are inherently bad at -- so they can focus on what they do well: finding problems in the user experience.
Sunday, May 16, 2010
Compile Cg shaders from command line
Use cgc to compile Cg shader programs from the command line (cgc --help for usage). Useful for seeing errors/warnings and instruction counts.
Cheat sheet:
Vertex shader:
cgc vert.cg -entry main -profile arbvp1
Fragment shader:
cgc frag.cg -entry main -profile arbfp1
Tuesday, April 27, 2010
Venting: automation frameworks
Automation can be your best friend, or your worst enemy. It really comes down to how your organization defines "automation".
Right now, I'm feeling like a slave to a large bureaucracy. They'll run my test, but only if I write a bunch of wrapper code and fill out all of their forms in triplicate. They claim they are making my life easier by providing a "framework", but this framework feels more like an obstacle course.
Automation should:
- Be easy to create
- Be easy to run
- Give prompt feedback
- Make your life easier!
PowerShell is an excellent tool for this:
- Scripts can be written in any text editor, and a decent one is included with recent versions of Windows (PowerShell ISE)
- Scripts can be triggered locally or remotely from the command line, or scheduled to run at a later time, on any recent version of Windows. No compilation is required, so no special tools (like Visual Studio) are needed.
- PowerShell can easily parse XML (useful for loading configurations or saving results), interact with COM (which lets you control IE at the DOM level, among other things), and leverage .NET (which lets you do pretty much anything)
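For example, the XML point really is a one-liner: casting a string (or the output of Get-Content) to [xml] gives you a navigable document. The element names here are invented for the demo:

```powershell
# [xml] turns a string into a document you can walk with dot notation
[xml]$results = '<run><test name="login" outcome="Passed"/></run>'
$results.run.test.outcome   # Passed
```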
The system I'm learning at the moment seems to violate all of these rules:
- Entries for new tests must be created in a Web UI. These entries must reference separate metadata, which is created with an internal tool. This metadata in turn references a dll containing the new test function(s). These new test functions must be added to a Visual Studio project of related tests, which will build the dll. (In my current case, these test functions are just wrappers that reference *another* dll...) Untangling this web is daunting to a newcomer, and likely still frustrating to those accustomed to it.
- A functioning local repository (whose setup is non-trivial) is required to interact with the service that runs automated tests. All test code editing must be done in this repository.
- The service that runs automated tests performs extensive setup (possibly including re-imaging the OS) on arbitrarily selected target machine(s) from a pool (after waiting for machine(s) to become available). Waiting 3 hours for official results is par for the course. Tests can be run against local machines, but tests that work here will not necessarily work during official runs.
- Adding a new test to the automation system is a non-trivial and frustrating process, which discourages use of said system.
Testers need to keep abreast of changes in the product, and so must their test scripts. Stifling layers of complexity make it hard to keep these scripts up to date, thus negating their usefulness. An automation system should be simple and flexible, and the path from idea to script should be as quick and painless as possible. Some basic common infrastructure may be necessary, but it should strive to stay out of the way as much as possible.
Saturday, April 10, 2010
Rick Barraza Silverlight blog link
This looks like an interesting source of Silverlight visual effects. Check back here later: http://cynergysystems.com/blogs/page/rickbarraza
Monday, March 22, 2010
Authentication for Coded VSTS Web Tests
To set credentials for a Visual Studio WebTest programmatically, simply add the following to the class that extends WebTest:
this.UserName = "domain\\user";
this.Password = "pass";
Sunday, March 21, 2010
Easily run commands on many machines with powershell
Need to run a series of commands on a bunch of machines in a row? Try something like this in PowerShell:
01,02,03,04,05,08,09,10,19,20,21 |
%{"base10name12{0:D2}" -f $_} | %{
Enable-WSManCredSSP client $_
invoke-command -comp $_ { cp D:\logs\* \\server\share\logs }
restart-computer $_
}
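The one non-obvious piece above is the -f format operator, which builds the zero-padded machine names. In isolation (with a hypothetical prefix):

```powershell
# {0:D2} formats the number as two digits, zero-padded
$names = 1,2,19 | ForEach-Object { "server{0:D2}" -f $_ }
$names -join ","   # server01,server02,server19
```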
Deleting SharePoint lists with PowerShell
This web-scrapes the "All Site Content" page of a SharePoint site, picking out the Document Libraries and Lists that are not part of the default setup. It then deletes them (actually sends them to the recycle bin).
$cred = New-Object System.Management.Automation.PSCredential "domain\user",("pass" | ConvertTo-SecureString -AsPlainText -Force)
$client = new-object system.net.webclient
$client.Credentials = $cred
$page = $client.DownloadString("http://server/_layouts/viewlsts.aspx")
$matches = [regex]::Matches($page, "AllItems.aspx`">([^<]+?)<")
$toDel = $Matches | %{$_.groups[1].value} | ?{
$_ -ne "Customized Reports" -and
$_ -ne "Shared Documents" -and
$_ -ne "Site Assets" -and
$_ -ne "Style Library" -and
$_ -ne "Announcements" -and
$_ -ne "Links" -and
$_ -ne "Tasks" -and
$_ -ne "Team Discussion"}
$toDel | %{
$req = [System.Net.HttpWebRequest]::Create("http://server/$_")
#$req.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
$req.Credentials = $cred
$req.Method = "DELETE"
$res = $req.GetResponse()
sleep 5 # give it a chance to catch its breath. otherwise, you may have to run the script several times
}
Emptying the recycle bin is left as an exercise to the reader (or me, when I have more time)
Saturday, March 20, 2010
If SharePoint Service Instances are "Provisioning" forever...
Note to self: Make sure the SPTimer service is running on all machines in your SharePoint farm. If things seem "paused" (ie: service instances are "provisioning" forever), this may be the culprit.
Saturday, March 6, 2010
Faster get-content with ReadCount
By default, get-content reads one line at a time, presumably so you can see file content before PowerShell has finished reading the file. When you want to process large files (ie: logs) and don't want to see the raw contents, use the ReadCount parameter to speed things up. To read x lines at a time, set -ReadCount x. To read the whole file at once, set -ReadCount 0.
cat -ReadCount 0 $file
Note that this returns an array instead of a single line. The easiest way around this is to feed the resulting array down the pipeline one line at a time (still faster):
cat -ReadCount 0 $file | %{$_}
More detail here.
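To see the difference in what comes down the pipeline, count the objects each way (the sample file is created just for the demo):

```powershell
Set-Content sample.txt -Value "a","b","c"
(Get-Content sample.txt | Measure-Object).Count                # 3: one string per line
(Get-Content sample.txt -ReadCount 0 | Measure-Object).Count   # 1: a single array of all lines
```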
Control lala website from powershell
I recently discovered the lala web music service. I like the idea of having a music collection in the cloud, but I also like having shortcut keys to play/pause, skip, etc, while I'm in another app. PowerShell can control Internet Explorer, so I can do this by calling the following script from AutoHotKey (or the like):
# usage: lala.ps1 -PlayPause -Status
param([switch]$PlayPause, [switch]$Previous, [switch]$Next, [switch]$Show, [switch]$Hide, [switch]$Status)
$app = New-Object -ComObject shell.application
$ie = $app.Windows() | ?{ $_.LocationURL -match "www.lala.com" }
$doc = $ie.Document
if ($Show) { $ie.Visible = $true }
if ($Hide) { $ie.Visible = $false }
if ($Previous) { $doc.getElementById("headerPrevButton").Click() }
if ($PlayPause) { $doc.getElementById("headerPauseButton").Click() }
if ($Next) { $doc.getElementById("headerNextButton").Click() }
if ($Status) {
$doc.getElementById("headerTrackTitle").InnerText
$doc.getElementById("headerTrackArtist").InnerText
}
This version expects a lala window to already exist and have music queued up. Future versions will handle this more robustly.
Tuesday, February 16, 2010
Changing a VSTS Agent's Controller
The controller pointed to by a Visual Studio Team Test Agent can be changed with AgentConfigUtil.exe, e.g. from PowerShell:
. ${env:ProgramFiles(x86)}\*Test*Load*Agent\LoadTest\AgentConfigUtil.exe /controller:(machine)
You should then restart the Visual Studio Team Test Agent service:
net stop "Visual Studio Team Test Agent"
net start "Visual Studio Team Test Agent"
Tuesday, February 9, 2010
Special characters in PowerShell paths (ie: * [ ] )
If you want to ls a path that includes characters such as *, [, or ], use the -LiteralPath parameter.
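For example, square brackets are wildcard syntax to the ordinary path parameters, so a plain ls can miss such a file entirely (the file is created here just for the demo):

```powershell
Set-Content -LiteralPath 'data[1].txt' -Value 'x'   # create a file with brackets in its name
@(Get-ChildItem 'data[1].txt' -ErrorAction SilentlyContinue).Count   # 0: [1] is read as a wildcard set
(Get-ChildItem -LiteralPath 'data[1].txt').Name                      # data[1].txt
```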
Tuesday, January 26, 2010
Registering a new address with AT&T DSL
AT&T wants to guide you through one of its pages, but it doesn't seem to like any browsers that people actually have installed. Go here instead: 144.160.11.35/register
When you're done, it still may not work. Unplug your modem for 30 seconds, plug it back in, and then try again.
Saturday, January 16, 2010
Rudimentary PowerShell wrapper (CMD.exe replacement)
PowerShell rocks, but CMD.exe is too limited for my tastes. I'm going to try making a wrapper/CMD.exe replacement for PowerShell. And what better language to write it in than PowerShell script itself? I'm calling it "IggyPosh" for now (POwer SHell).
Here's an early draft. No prompt or command history, but you can type commands and they are executed, so it's a start.
function test($str) {
$str = $rs.CreatePipeline($str).Invoke() | Out-String
$history.Text += $str -replace "\s+$([Environment]::NewLine)",[Environment]::NewLine
$history.SelectionStart = $history.Text.Length
$history.ScrollToCaret()
}
# Prepare the window
[Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
$form = New-Object Windows.Forms.Form
$form.Text = "IggyPosh"
$form.Size = New-Object Drawing.Point 600,400
$history = New-Object Windows.Forms.TextBox
$history.Multiline = $true
$history.ReadOnly = $true
$history.ScrollBars = [System.Windows.Forms.ScrollBars]::Vertical
$history.Dock = [System.Windows.Forms.DockStyle]::Fill
$history.BackColor = [System.Drawing.Color]::FromArgb(0,64,0)
$history.ForeColor = [System.Drawing.Color]::FromArgb(224,224,224)
$history.Font = New-Object System.Drawing.Font("Consolas", 9)
$form.Controls.Add($history)
$command = New-Object Windows.Forms.TextBox
$command.Dock = [System.Windows.Forms.DockStyle]::Bottom
$command.add_KeyDown({
if ($_.KeyCode -eq "Enter") { test($command.Text); $command.Text="" }
})
$form.Controls.Add($command)
# Try to init a powershell instance we can talk to
$rs = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace()
$rs.Open()
$form.ShowDialog()
Wednesday, January 6, 2010
Fix: Installers fail because of "pending reboot" (windows)
If an installer refuses to run because of a "pending reboot" to your system, it may be because of files listed in the "PendingFileRenameOperations" key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations
Open that key in Regedit and check the entries. If they look harmless (especially if they are in some sort of Temp folder), remove them and try again.
More information here.
Tuesday, January 5, 2010
Create directories and upload files to Sharepoint from PowerShell
Create a new folder on a SharePoint server hosted at http://server/:
$url = "http://server/Shared%20Documents/newfolder/"
$req = [System.Net.HttpWebRequest]::Create($url)
$req.Credentials = [System.Net.CredentialCache]::DefaultCredentials
$req.Method = "MKCOL"
$res = $req.GetResponse()
Add a file to that folder:
$dest = $url+"newfile.txt"
$src = "C:\myfile.txt"
$wc = New-Object System.Net.WebClient
$wc.Credentials = [System.Net.CredentialCache]::DefaultCredentials
$wc.uploadfile("$dest", "PUT", $src)
Sunday, January 3, 2010
IEEE float special values (infinity, NaN)
I couldn't find anything in the Cg documentation on how to specify infinity or NaN, but these seem to work:
Infinity: 0x7f800000
-Infinity: 0xff800000
NaN: 0x7fc00000
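These are just the standard IEEE 754 single-precision bit patterns; you can sanity-check them with .NET's BitConverter (this verifies the constants, it isn't Cg itself):

```powershell
# Reinterpret each 32-bit pattern as a single-precision float
[BitConverter]::ToSingle([BitConverter]::GetBytes(0x7f800000), 0)                    # Infinity
[BitConverter]::ToSingle([BitConverter]::GetBytes([uint32]0xff800000), 0)            # -Infinity
[Single]::IsNaN([BitConverter]::ToSingle([BitConverter]::GetBytes(0x7fc00000), 0))   # True
```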
Saturday, January 2, 2010
Solved: "Why won't my quad show up?!?"
Order of vertices is important. This will show up if put in front of the camera:
// counter-clockwise winding: front-facing, so it survives backface culling
glVertex3f( size,  size, z);
glVertex3f(-size,  size, z);
glVertex3f(-size, -size, z);
glVertex3f( size, -size, z);
This, on the other hand, won't (it'll show up if you move the camera past it and turn around):
// clockwise winding: back-facing, so backface culling discards it
glVertex3f(-size,  size, z);
glVertex3f( size,  size, z);
glVertex3f( size, -size, z);
glVertex3f(-size, -size, z);
Guess which one I was just trying for 30+ minutes. :-/
This was due to my using backface culling. Durr. ><