How can I get more people to fill out my survey?
Make it compelling. Quickly and clearly make these points: who you are and why you are doing this; how long it takes; and what's in it for them (why someone should help you by completing the survey). Example: "Please spend 3 minutes helping me make it easier to learn Mathematics. Answer 8 short questions for my eternal gratitude and (optional) credit on my research findings. Thank you SO MUCH for helping."

Make it convenient. Keep it short, and show up at the right place and time, when people have the time and inclination to help. For example, reach students when they are planning their schedules.

Reward participation. Offer gift cards, eBooks, study tips, or some other incentive for helping.

Test and refine. Try different offers, and even different question wording and ordering, to learn which gets the best response rate, then send more invitations using the version with the highest response rate.

Reward referrals. If offering a reward, increase it for referrals, and include a custom invite link that tracks referrals.
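The "test and refine" step can be made concrete with a quick significance check between two invitation variants. This is just a minimal sketch of a two-proportion z-test; the function name and the response counts below are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two response rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pilot: offer A got 40/400 responses, offer B got 70/400.
z = two_proportion_z(40, 400, 70, 400)
# |z| > 1.96 means the difference is significant at the 5% level,
# so you'd send the remaining invitations using offer B.
print(round(abs(z), 2))
```

If the pilot difference clears the threshold, roll the winning variant out to the rest of your list; if not, keep testing before committing.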
How can I find out about buildings to be demolished near me?
Go to your City Council; they usually post demolition notices. If not, open an enquiry there about recently granted permissions to demolish buildings. You can also watch local newspapers for ads seeking personnel for demolition work. Keep track of old, sealed buildings around you and visit them from time to time. In some countries, official signs are posted beside the works, indicating timetables for debris removal, heavy machinery traffic, permission for the works to be done, and so on.
If an electric car is out of juice on the road, how do you fill it up like you do for a gas car?
The best answer is to buy or lease an extended-range EV (EREV) like the Chevy Volt or the Ford Fusion Energi. These are EVs with a gasoline generator for backup.

I leased a 2012 Volt for 3 years and now I'm leasing a 2015 Volt for 3 more. I plug it in in my garage every night when I get home, and the battery is full every morning. I can drive 40 (sometimes 50!) miles on about $1.30 of electricity; then the car automatically starts the generator to keep the battery charged to about 20% while I drive around like normal. I can drive another 340 miles or so on 9 gallons of gas, pull into a gas station, get more gas, and then drive across the country if I want, just like in a normal car.

The game changer is being able to plug in and drive on grid power (or directly on sunlight, if you have solar panels!) instead of gasoline. So instead of buying gas every few days, or every few weeks with a hybrid, you can go several months between fill-ups. I've only filled my tank twice since Christmas, and that was only because I drove 1,200 miles round trip for Christmas and then down to Miami in March to see the Formula E race. Since I drive less than 40 miles a day, I can do 99% of my driving on the battery without using a drop of gas (even at 100 MPH, the generator will not turn on until the battery is drained). But having the gas range extender means I can go anywhere a gas car can without worrying at all about running out of juice.

This is probably our best bet until batteries alone can deliver 200-300 miles of range and Tesla-Supercharger-like fast chargers exist for all cars on road trips. Until then, we just plug in when we get home and use gas if we run out of juice or on long trips.
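The answer's figures ($1.30 of electricity for ~40 miles, 9 gallons for ~340 gas miles) make the cost difference easy to check. A rough per-mile comparison, where the gas price is an assumption added purely for illustration:

```python
# Figures from the answer above; gas_price is an assumed value.
electric_cost = 1.30   # dollars for a full overnight charge
electric_range = 40    # miles on that charge
gas_gallons = 9        # tank used by the range extender
gas_range = 340        # miles on those gallons
gas_price = 3.00       # assumed dollars per gallon

cost_per_mile_electric = electric_cost / electric_range
cost_per_mile_gas = gas_gallons * gas_price / gas_range

print(f"electric: ${cost_per_mile_electric:.3f}/mile")
print(f"gas:      ${cost_per_mile_gas:.3f}/mile")
```

At these numbers, battery miles cost roughly 3 cents each versus about 8 cents on gasoline; the exact ratio depends on local electricity and fuel prices.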
What is the resolution of the human eye in megapixels?
It wouldn't directly match a real-world camera... but read on.

On most digital cameras, you have orthogonal pixels: they're in the same distribution across the sensor (in fact, a nearly perfect grid), and there's a filter (usually the "Bayer" filter, named after Bryce Bayer, the scientist who came up with the usual color array) that delivers red, green, and blue pixels.

So, for the eye, imagine a sensor with a huge number of pixels, about 130 million. There's a higher density of pixels in the center of the sensor, and only about 6 million of those sensors are filtered to enable color sensitivity. Somewhat surprisingly, only about 100,000 sense blue! Oh, and by the way, this sensor isn't flat but semi-spherical, so a very simple lens can be used without distortion. Real camera lenses have to project onto a flat surface, which is less natural given the spherical nature of a simple lens (in fact, better lenses usually contain a few aspherical elements).

This is about 22mm diagonal on average, just a bit larger than a Micro Four Thirds sensor... but the spherical shape means the surface area is around 1100mm^2, a bit larger than a full-frame 35mm camera sensor. The highest pixel count on a 35mm sensor is on the Canon 5Ds, which stuffs 50.6 megapixels into about 860mm^2.

So that's the hardware. But that's not the limiting factor on effective resolution. The eye seems to see "continuously", but its operation is cyclical; there's kind of a frame rate that's really fast... but that's not the important one. The eye is in constant motion from ocular microtremors that occur at around 70-110Hz. Your brain is constantly integrating the output of your eye as it moves around into the image you actually perceive, and the result is that, unless something's moving too fast, you get an effective resolution boost from 130 megapixels to something more like 520 megapixels, as the image is constructed from multiple samples.

Except you don't.
For one, your luminance-only rod cells, being sensitive in low light, actually saturate in bright light. In full daylight or bright room light, they're completely switched off, leaving your 6 million or so cone cells as your only visual input. With microtremors, you may have about 24 million effective inputs at best... not exactly the same as 24 megapixels. And that's per eye, of course, so call it 48 megapixels if you want to draw that equivalence.

In the dark, the cones don't detect much; it's all rods at that point. Technically that's more "pixels," but your eye and brain are dealing with a low photon flux density, the same thing that causes ugly "shot noise" in low-light photographs. So your brain is only getting input from the rods that actually detect something.

And all 130 million sensors are "wired" down to about 1.2 million axons of the ganglion cells that connect the eye to the brain. Your visual data is already being processed and crunched before it ever gets to the brain.

Which makes perfect sense: our brains tackle this kind of problem as a parallel processor with performance comparable to the fastest supercomputers we have today. When we perceive an image, there's this low-level image processing, plus specialized processes that work on higher-level abstractions. For example, we humans are really good at recognizing horizontal and vertical lines, while our friendly frog neighbors have specialized processing in their relatively simple brains looking for a small object flying across the visual field: that fly he just ate. We also do constant pattern matching of what we see against our memories. So we don't just see an object; we instantly recognize it and call up a whole library of information on that thing we just saw.

Another interesting aspect of our in-brain image processing is that we don't demand any particular resolution. As our eyes age and we can't see as well, our effective resolution drops, and yet we adapt.
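The back-of-the-envelope arithmetic above is simple enough to lay out explicitly. All numbers are the passage's own approximations, and the ~4x microtremor gain is inferred from its 130M-to-520M figure:

```python
# Approximate figures restated from the passage above.
photoreceptors = 130_000_000   # rods + cones per eye
cones = 6_000_000              # color-sensitive cells per eye
microtremor_boost = 4          # rough multi-sample gain (130M -> 520M)

full_scene_effective = photoreceptors * microtremor_boost  # ~520 "megapixels"
daylight_effective = cones * microtremor_boost             # ~24M, rods saturated
both_eyes = 2 * daylight_effective                         # ~48M across two eyes

print(full_scene_effective, daylight_effective, both_eyes)
```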
In a relatively short term, we adapt to what the eye can actually see... and you can experience this at home. If you're old enough to have spent lots of time in front of standard-definition television, you have already experienced this. Your brain adapted to the fairly terrible quality of NTSC television (or the slightly less terrible but still bad quality of PAL), and then perhaps jumped to VHS, which was even worse than what you could get via broadcast. When digital started, between VideoCD and early DVRs like the TiVo, the quality was really terrible... but if you watched lots of it, you stopped noticing the quality over time, as long as you didn't dwell on it. An HDTV viewer of today, going back to those old media, will be really disappointed, mostly because their brain moved on to the better video experience and dropped those bad-TV adaptations over time.

Back to the multi-sampled image for a second... cameras do this. In low light, many cameras today can average several different photos on the fly, which boosts the signal and cuts down on noise. Your brain does this, too, in the dark. And we're even doing the "microtremor" thing in cameras. The recent Olympus OM-D E-M5 Mark II has a "hi-res" mode that takes 8 shots with half-pixel adjustments, to deliver what's essentially two 16-megapixel images in full RGB (because the full-pixel steps ensure every pixel is sampled at R, G, B, and G), one offset by half a pixel from the other. Interpolating these interstitial images as a normal pixel grid delivers 64 megapixels, but the effective resolution is more like 40 megapixels... still a big jump up from 16. Hasselblad showed a similar thing in 2013 that delivered a 200-megapixel capture, and Pentax is also releasing a camera with something like this built in.

We're doing simple versions of the higher-level brain functions, too, in our cameras. All kinds of current-model cameras can do face recognition and tracking, follow-focus, etc.
They're nowhere near as good at it as our eye/brain combination, but they do OK for such weak hardware. They're only a few hundred million years late...
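The multi-frame averaging idea, the same one behind the brain's integration and camera stacking modes, can be shown with a toy model. This is a minimal sketch, not any camera's actual pipeline; the names and noise figures are made up for illustration:

```python
import random
import statistics

# Toy model: each "frame" is a true pixel value plus Gaussian noise.
# Averaging N frames shrinks the noise standard deviation by ~sqrt(N).
random.seed(42)
TRUE_VALUE = 100.0
NOISE_SD = 10.0

def capture_frame():
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

# Spread of single exposures vs. 64-frame averages, over many trials.
singles = [capture_frame() for _ in range(200)]
stacks = [statistics.mean(capture_frame() for _ in range(64))
          for _ in range(200)]

print(round(statistics.stdev(singles), 1))  # noise of one frame, ~10
print(round(statistics.stdev(stacks), 1))   # ~10 / sqrt(64) = ~1.25
```

The averaged "stacks" cluster about 8x more tightly around the true value than single frames, which is exactly the signal-to-noise gain that multi-shot modes buy.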