OneDrive and the 0x80270113 error

Error 0x80270113 is yet another mysterious Windows error that Microsoft doesn’t explain well. First, some background:

OneDrive for Windows 8.1 supports online-only files. It’s a great idea: these files look normal, but they take up almost no space. That’s because they actually only exist in Microsoft’s cloud and are not on your hard drive.

If you right-click on an online-only file and select Properties, you’ll see that the Size on disk field is just a few bytes. It’s only a stub file. If you open the stub file, OneDrive transparently downloads the full copy.

In theory, online-only files are a great way to share files across multiple devices that may not have enough drive space to copy all of them locally. Strangely, Microsoft is ditching online-only files for Windows 10.

Error 0x80270113 happens when Windows doesn’t know how to open a OneDrive stub file. Here’s how it happened to me:

I am quitting OneDrive because it’s slow, and I have several gigabytes of files in it that I need to move out.

Normally, moving files is a fast cut-and-paste operation: the file’s contents aren’t actually moved. Rather, a few bytes of filesystem metadata are tweaked to tell Windows that the file now lives in a new folder.

This is like adjusting highway signs to tell people a new route to a city. The city is still in the same place; all that changed was the route you take to get to the city.

Moving files out of OneDrive is like moving the whole city! Even if you aren’t using online-only files, moving files in or out of your computer’s OneDrive folder is a pokey copy-and-delete operation: a copy of the file is made in the destination location, then the file is deleted from its source location. This is far slower than a true move.

With many gigabytes of files to move, I got impatient with the long wait. I stopped the move, shut down OneDrive, and used PowerShell’s Move-Item cmdlet to do a classic file move operation.
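
For reference, it was roughly this, a minimal sketch with hypothetical paths:

  # Hypothetical paths. With source and destination on the same volume,
  # Move-Item rewrites filesystem metadata instead of copying file contents,
  # which is why it is nearly instant and also why it bypasses OneDrive entirely.
  Move-Item -Path "C:\Users\me\OneDrive\*" -Destination "C:\Archive" -Force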

Oops! Move-Item isn’t aware of OneDrive, so it happily moved the file stubs without downloading them first. Only when I tried to open the stub files outside of their OneDrive directory did I get the 0x80270113 error! The error probably means that you have a stub file outside of its OneDrive directory, and Windows doesn’t know how to deal with it.

To make things worse, after I moved all these files out of OneDrive, the OneDrive agent synchronized my now empty OneDrive folder, which caused all the online copies of the files to be deleted. (That is actually correct behavior: if you get rid of a file locally, it should also be removed from the online drive.) This means I was left with only stub files on my hard drive and an empty OneDrive. Is my data gone?

Luckily, OneDrive has an online Recycle Bin. I restored everything from the online Recycle Bin back into OneDrive. My local OneDrive agent then set up online-only stubs of all these files. Now I can use Windows Explorer’s cut-and-paste feature to move these files out of OneDrive, pasting them into the same location where I had moved the files using Move-Item. With this operation, I am telling Windows Explorer to overwrite the stub files in the destination, replacing the tiny stubs with actual data.

At this point, you may ask, “Why did you move your files using Move-Item if you had set them to be online-only?” Answer: I never set any files to be online-only on this PC! I don’t know why that happened. All I can guess is one of:

  • OneDrive does this intentionally for some files.
  • A OneDrive bug.
  • I had the OneDrive client running on two other PCs, and on both those other PCs, I set them to use online-only file copies. Perhaps OneDrive somehow carried that setting over to my main computer?

Regardless of why, this is a pain to deal with. I’m very fortunate that OneDrive’s Recycle Bin actually works!

OneDrive is throttled and slow

OneDrive has a low upload speed cap for new files, so uploading new files is slow.

To test, I uploaded several GB of data with Google Drive and OneDrive. I used NetBalancer to monitor upload speeds. Over 10 minutes, I averaged these upload speeds:

  • Google Drive (googledrive.exe): 2.3 MB/s
  • OneDrive (skydrive.exe): 0.2 MB/s

That’s right, OneDrive’s upload speed is about one tenth of Google Drive’s! This test was done over an 802.11n Wi-Fi connection to an unthrottled corporate network with at least 1.5 Gb/s of upload bandwidth to the internet. Yes, there was upload activity the entire time, although OneDrive paused uploads between files or batches of files.

Others experience slow uploads.

Also, moving files into your OneDrive folder is slow. Instead of a move, it does a copy-and-delete operation. This is painful on spinning media, especially with a lot of files.

OneDrive isn’t good. It’s slow.

Postgres’s pg_upgrade on Windows: the documentation misses a lot

Postgres is the gift of timesuck. It’s a great database, especially for spatial data. However, without insider knowledge, simple tasks eat time.

Latest example: pg_upgrade. It shrinks upgrade complexity, if it works! For Windows users, the documentation’s steps are incomplete and, in places, useless.

Here’s a better procedure that I used for upgrades on Windows. It assumes you only have the old version installed and you’re using the default x64 install location of C:\Program Files\PostgreSQL. The steps (a consolidated PowerShell sketch of the scriptable steps follows the list):

  1. Install the newer Postgres with the Windows installer. Don’t let it use TCP port 5432 yet; that’s used by your current Postgres instance. The installer should detect this and recommend 5433. Add needed extensions with the Application Stack Builder. If you use PostGIS, don’t install the sample database. Do nothing else with the new instance.
  2. With the Services control panel, shut down both the old and new databases.
  3. Create a new account on your PC named postgres. This is a Windows user, not a user inside the Postgres database, and it does not need the same password as the postgres account in your databases. Add it to your PC’s Administrators group. (I didn’t have this account, and I don’t know why Postgres or pg_upgrade needs it. A better design would let me specify database accounts for each install with pg_upgrade command line switches. The --username switch didn’t appear to do that, plus it would use the same username across both databases, which may not always be appropriate.)
  4. With Windows Explorer, give the Windows postgres account Full Control permission on both C:\Program Files\PostgreSQL\ and each instance’s data directory, located at C:\Program Files\PostgreSQL\version\data. Yes, you must grant this on both data directories explicitly; for some reason, the data directories do not inherit permissions. (Make sure the permissions are standard, in the sense that they are inherited by children. This should happen if you use the simple permissions dialog.)
  5. Create new pg_hba.conf files for both instances. (This change puts your databases in an insecure state. I strongly recommend you revert it before you do anything that causes the database services to start.) These files are in C:\Program Files\PostgreSQL\version\data. It takes two steps:
    1. Back up the current files by renaming them to pg_hba.conf.bak. You’ll revert them when done.
    2. Create new pg_hba.conf files in each instance’s data directory. The new files only have these two lines:
      host all all 127.0.0.1/32 trust
      host all all ::1/128 trust
  6. Open a command prompt window in administrator mode, then:
    1. RUNAS /USER:postgres "CMD.EXE"
    2. Change to the bin directory of the newer install of Postgres. It will be C:\Program Files\PostgreSQL\version\bin.
    3. Run the command below, changing oldVersion and newVersion to your actual old and new Postgres version directories. This will take a while to run if you have a lot of data; wait until it is done before continuing.
      pg_upgrade.exe --old-datadir "C:/Program Files/PostgreSQL/oldVersion/data" --new-datadir "C:/Program Files/PostgreSQL/newVersion/data" --old-bindir "C:/Program Files/PostgreSQL/oldVersion/bin" --new-bindir "C:/Program Files/PostgreSQL/newVersion/bin"
  7. In the postgresql.conf file for the new Postgres instance, change the listening port to 5432 (from 5433).
  8. Revert the security-reducing changes to the pg_hba.conf files for both servers:
    1. Delete the current pg_hba.conf files.
    2. Rename the pg_hba.conf.bak files back to pg_hba.conf.
  9. Through the Services control panel:
    1. Start your new Postgres instance.
    2. Reconfigure both Postgres services as needed. The service for your old Postgres instance may have its Startup type set to Automatic; it should probably be Manual or Disabled, so the old and new instances don’t end up running at the same time.
  10. Vacuum and reanalyze all databases.
  11. If you created a postgres Windows account, remove it.
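
As promised above, here is a consolidated PowerShell sketch of the scriptable steps. It is a sketch, not a drop-in script: the version directories, password, and service names are placeholders, steps 6, 7, and 9 remain manual, and you should run each piece as you reach the matching step.

  # Run from an elevated PowerShell prompt. Every path, version directory,
  # password, and service name below is a placeholder; adjust to your setup.
  $old = "C:\Program Files\PostgreSQL\oldVersion"   # old install (placeholder)
  $new = "C:\Program Files\PostgreSQL\newVersion"   # new install (placeholder)

  # Step 2: stop both database services (names vary; check Get-Service first).
  Get-Service postgresql* | Stop-Service

  # Step 3: create the temporary Windows postgres account as an administrator.
  net user postgres "TempP@ssw0rd" /add
  net localgroup Administrators postgres /add

  # Step 4: grant Full Control that children inherit ((OI)(CI)F) on the
  # install root, and explicitly on both data directories.
  icacls "C:\Program Files\PostgreSQL" /grant "postgres:(OI)(CI)F"
  icacls "$old\data" /grant "postgres:(OI)(CI)F"
  icacls "$new\data" /grant "postgres:(OI)(CI)F"

  # Step 5: back up pg_hba.conf and swap in the wide-open (insecure) version.
  foreach ($dir in "$old\data", "$new\data") {
      Rename-Item "$dir\pg_hba.conf" "pg_hba.conf.bak"
      Set-Content "$dir\pg_hba.conf" @(
          "host all all 127.0.0.1/32 trust",
          "host all all ::1/128 trust")
  }

  # Step 6 (manual): open the RUNAS /USER:postgres "CMD.EXE" shell and run
  # pg_upgrade.exe there, since it must run as the postgres Windows user.

  # Step 8: revert the insecure pg_hba.conf changes.
  foreach ($dir in "$old\data", "$new\data") {
      Remove-Item "$dir\pg_hba.conf"
      Rename-Item "$dir\pg_hba.conf.bak" "pg_hba.conf"
  }

  # Steps 7 and 9 (manual): change the new instance's port to 5432 in
  # postgresql.conf, then start it from the Services control panel.

  # Step 10: vacuum and analyze every database in the new instance.
  & "$new\bin\vacuumdb.exe" --all --analyze --username postgres --port 5432

  # Step 11: remove the temporary Windows account.
  net user postgres /delete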

The new Postgres instance’s postgres database account (database account, not the Windows account you already deleted) will have the same password as the old Postgres instance.

Once you’ve verified that everything works properly, you might consider uninstalling the old Postgres copy.

QUESTION: Does pg_upgrade.exe cause the stopped Postgres instances to start? If not, then some of the above steps may be unnecessary. In short, the old instance would be shut down as step 1, the new instance would be installed directly on port 5432, and the pg_hba.conf edits would be unnecessary. Let me know if you try this!
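
One low-risk way to probe this: pg_upgrade has a --check mode that examines both clusters without changing any data. Running it from the same RUNAS shell (with the placeholder directories adjusted, as before) while watching the Services control panel should reveal whether pg_upgrade starts the stopped instances itself:

  pg_upgrade.exe --check --old-datadir "C:/Program Files/PostgreSQL/oldVersion/data" --new-datadir "C:/Program Files/PostgreSQL/newVersion/data" --old-bindir "C:/Program Files/PostgreSQL/oldVersion/bin" --new-bindir "C:/Program Files/PostgreSQL/newVersion/bin"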

Do projects matter for IT?

(Originally posted on Eric Brown’s Technology, Strategy, People & Projects blog on June 7, 2011, with some edits. Still highly relevant at the end of 2014!)

Short answer: not as much as many believe.

Information technology (IT) focus is shifting from classical projects to agile services. Here’s why.

Reason 1: Much of IT defies project definition

A classical project has a predetermined start, end, work breakdown, and result. When done, the result goes into “maintenance mode”, and you jump to the next project.

But what if something never has a “maintenance mode”? What if a work breakdown is impossible to know?

For example, the web is never done. A university’s web site must be exciting and work quite well; its key audiences, prospective and current students, are technologically progressive. What university wants technophobic students? Relevant university sites must keep up with rapidly evolving consumer technologies.

A university’s web site is a good example of an agile service: an adaptive mix of agile applications and expertise. These are where a lot of IT’s attention is going.

Agile services don’t end. They are not classical projects.

Reason 2: Small projects don’t matter much

Isn’t a service just a lot of mini-projects? And isn’t the latest trend to make projects smaller?

Neither matters much. Small projects are really large tasks or iterations in an agile service.

By themselves, small projects don’t tell the value of IT. Agile services do.

Reason 3: Virtualization and cloud computing

The largest classical IT projects are implementations. Virtualization and especially cloud computing make it easier to stand up new systems, sometimes shrinking implementation projects into simple tasks.

Without huge implementations, the focus shifts to maximizing the value of existing investments. Again, this emphasizes agile services at the expense of classical projects.

Reason 4: Agile is where it’s at

Classical projects use waterfall, a prescriptive method from the manufacturing and construction industries. It’s from a time when the pace was steady, change was resisted, and decisions came from the top down.

Relevant IT is the opposite: fast-paced, adaptive, and responsive. That’s why agile management is natural for IT: it encourages adaptation, continual reassessment, early problem discovery, and faster completion.

I’m not the only one seeing this. Look at Google search trends for agile project (blue) versus waterfall project (red):

[Google Trends chart: agile project vs. waterfall project]

But this isn’t just about improving how projects are done. Agile does something that waterfall can’t: manage services.

Paraphrasing Men In Black II, “Waterfall projects: old and busted. Agile services: new hotness.”

Do classical projects belong in IT?

Classical projects still have a place in relevant IT. We will still have cookie-cutter projects with well-understood paths and vanilla outcomes.

However, “well-understood” and “vanilla” work is being outsourced: email, web systems, ERP systems, and more. If not outsourced, it may be “keep the lights on” work, undifferentiated from plant operations. Or its business value is not intrinsic; the value is in what others (users, innovators, developers) can wring from it.

Agile services are the future of IT. It’s how relevant IT works, it’s how relevant IT provides business value, and it’s how relevant IT communicates what it does.

Heartbleed = overcomplexity + input validation failure

The Heartbleed vulnerability exists because the OpenSSL code didn’t validate an input. It also exists because OpenSSL carried unnecessary complexity.

OpenSSL has a heartbeat feature that allows clients to send up to 64 kilobytes of arbitrary data along with a field telling the server how much data was sent. The server then sends that same data back to confirm that the TLS/SSL connection is still alive. (Creating a new TLS/SSL connection can take significant effort.)

The problem: if the client claims it sent more data than it actually did, the server sends back the original data plus some of its RAM. For example, suppose the client sent a 1 KB message but said it was 64 KB. In response, the server would send a 64 KB message back: the original 1 KB message plus 63 KB of whatever sat next to it in the server process’s memory, which could include sensitive, unencrypted data such as private keys and passwords.
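
Here is a minimal sketch of the flaw, using PowerShell as stand-in pseudocode rather than OpenSSL’s actual C; the function name, variables, and tiny 25-byte “memory” are my own invention:

  # A toy model: $memory stands in for the server's heap, with the received
  # 5-byte payload at the front and unrelated secret data right after it.
  $payload = [System.Text.Encoding]::ASCII.GetBytes("hello")
  $secret  = [System.Text.Encoding]::ASCII.GetBytes("private key material")
  $memory  = [byte[]]($payload + $secret)

  function Respond-Heartbeat([byte[]]$Memory, [int]$ClaimedLength) {
      # Flawed: trusts the client's claimed length and reads past the payload.
      return $Memory[0..($ClaimedLength - 1)]
  }

  # The client really sent 5 bytes but claims 25: the reply leaks the secret.
  $reply = [byte[]](Respond-Heartbeat $memory 25)
  [System.Text.Encoding]::ASCII.GetString($reply)   # "helloprivate key material"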

How this could have been prevented:

  1. Avoid pointless complexity: don’t require the client to also send the length of the arbitrary text. The server should have been able to detect the length of the text.
  2. Validate all input. The server failed to ensure that the client’s claimed text length did not exceed the actual length. (The fact that the server could detect the message’s actual length further validates my view on #1. See the corrected sketch after this list.)
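
And the corrected sketch, under the same toy assumptions: one bounds check is all that was missing. (RFC 6520, which defines the heartbeat extension, says a message with an over-long claimed length must be discarded silently.)

  function Respond-Heartbeat([byte[]]$Memory, [int]$PayloadLength, [int]$ClaimedLength) {
      # Fixed: reject a claimed length that is non-positive or longer than
      # the payload actually received, silently discarding the message.
      if ($ClaimedLength -lt 1 -or $ClaimedLength -gt $PayloadLength) { return $null }
      return $Memory[0..($ClaimedLength - 1)]
  }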

Keep it simple! In addition to driving up creation and maintenance costs, needless complexity creates more opportunities for things to break.