
Lenovo ThinkCentre M93p: Powerful Mini Desktop PC With Intel Haswell, RapidCharge USB, and VESA Mount Support

  • Posted on February 28, 2017 at 8:52 pm

Although small, no bigger than a DVD drive, the latest Lenovo ThinkCentre M93p is pitched as an ideal choice for anyone who wants more mobility and portability from a desktop PC.

Relying on Intel Core processors up to a Haswell-generation Core i7, this desktop PC, just 34.5 mm thick, is said to deliver dependable performance across the whole system.

In addition to a 64GB SSD as its reliable primary storage, this mini desktop PC also offers a Solid State Hybrid Drive (SSHD) option as an alternative for storage expansion.

An “Always On” USB port with the RapidCharge feature lets you recharge gadgets even when the computer is off or in sleep mode, while an optional VESA mount lets you attach the unit to the back of a monitor to create something like an all-in-one PC.

As for price, a unit of the Lenovo ThinkCentre M93p mini desktop PC reportedly goes for 749 USD, equivalent to roughly 7.45 million Indonesian rupiah.

Apple Develops Its Own Voice Recognition Technology for Siri

  • Posted on February 28, 2017 at 4:32 am

Apple is reportedly developing its own voice recognition technology for Siri. Through a recently formed team, Apple appears to want to end its dependence on Nuance.
As a reminder, Nuance Communications is a multinational software maker based in Boston, USA, whose voice recognition software is used by Siri. It has now been revealed that two Nuance researchers have joined Apple.
Reporting from Xconomy, the Boston-based team consists of former Nuance employees: Gunnar Evermann, who has experience developing voice recognition technology; Larry Gillick, who serves as Chief Speech Scientist for Apple's Siri; and Don McAllaster, a former Nuance employee who now serves as Senior Research Scientist at Apple. Several other former Nuance employees have also joined Apple but are not based in Boston, including Caroline Labrecque and Rongqing Huang.

Asure Software to Present Complimentary Live Webinar on Maximizing the Benefits of the Workforce Management Evolution

  • Posted on February 27, 2017 at 12:24 am

What: Workplace-management software-solutions provider Asure Software, Inc. (ASUR) will present “Maximizing the Benefits of the Workforce Management Evolution,” a complimentary live webinar that will address trending, innovative uses for time and attendance data collection that can bring significant impact and competitive positioning to an organization, now and in the future. The session will be presented by Cooper Caywood, Asure Software Vice President of Client Services.

When: Wednesday, Aug. 7, 2013 from 1-2 p.m. EDT.

How: To register for the webinar, visit http://bit.ly/15yYeXE. HR.com members must log in to register for the webcast. Non-members can sign up for a free HR.com membership at http://www.hr.com/en/memberships/ and register for the webinar once their HR.com membership has been confirmed. Webcast participants need a computer with Internet access. Registered participants will receive complete login information 24 hours and two hours prior to the event. They also will receive a copy of the presentation slides and a Real Media file of the presentation that can be downloaded to and played on an iPod or MP3 player.

Why: The measurement of work time and the value of those measurements (data) continue to evolve in striking ways with impactful results. Time and attendance data remains a primary input into the process of workforce compensation. However, as technology, the workforce, and business environments have evolved, the collection processes and value of this time data have undergone seismic shifts as well.

Takeaways: Webinar participants will learn about: technological advances that elevate what used to be deemed utilitarian tasks into strategic value; accommodating an ever-changing workforce that has become more geographically dispersed, mobile and global; and increasing the value of time and labor management data for better decision-making.

About Asure Software
Asure Software, Inc. (ASUR), headquartered in Austin, Texas, offers cloud-based time and labor management and workspace management solutions that enable businesses to control their biggest costs — labor, real estate and technology — and prepare for the workforce of the future in a highly mobile, geographically disparate and technically wired work environment. Asure serves approximately 6,000 clients worldwide and currently offers two main product lines: AsureSpace™ workplace management solutions enable organizations to maximize the ROI of their real estate, and AsureForce® time and labor management solutions deliver efficient management of human resource and payroll processes.

Facebook speeds PHP by crafting a PHP virtual machine

  • Posted on February 25, 2017 at 2:33 pm

Social networking giant Facebook has taken another step toward making the PHP Web programming language run more quickly. The company has developed a PHP virtual machine that it says can execute the language as much as nine times as quickly as running PHP natively on large systems.

“Our goal is to make PHP run really, really quickly,” said Joel Pobar, a Facebook engineering manager. Facebook has been using the virtual machine, called the HipHop Virtual Machine (HHVM), across all of its servers since earlier this year.

Pobar discussed the virtual machine at the O’Reilly Open Source Conference (OSCON) being held this week in Portland, Oregon.

Shares its development tools

HHVM is not Facebook’s first foray into customizing PHP for faster use. PHP is an interpreted language, meaning that the source code is parsed and executed on the fly rather than compiled to machine code ahead of time. Generally speaking, programs written in interpreted languages such as PHP tend not to run as quickly as programs in languages, such as C or C++, that have been compiled beforehand into machine code. Facebook has remained loyal to PHP because it is widely understood by many of the Web programmers who work for the company.

To keep up with insatiable user demand, however, Facebook originally devised a compiler, called HipHop, that would translate PHP code into C++ so that it could then be compiled ahead of time for faster performance.
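To make the idea concrete, here is a hand-written C++ sketch of the kind of translation involved; it is illustrative only, not Facebook's actual generated code, and the "Value" type and function names are invented for the example. Because the translator cannot know a PHP variable's type in advance, the emitted C++ has to carry a catch-all value type and branch on it at runtime:

    #include <variant>

    // PHP source (dynamically typed):
    //   function double_it($x) { return $x + $x; }
    //
    // One way an ahead-of-time PHP-to-C++ translator might render it.
    // "Value" is a hypothetical catch-all type standing in for the
    // variant machinery such a compiler must emit.
    using Value = std::variant<long, double>;

    Value double_it(const Value& x) {
        // Every arithmetic operation must re-check the type, because
        // nothing about $x is known until runtime.
        if (const auto* i = std::get_if<long>(&x))
            return *i + *i;                                   // integer path
        return std::get<double>(x) + std::get<double>(x);     // float path
    }

Every call site pays for that branching at runtime, which is part of what limited the ahead-of-time approach.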

While Facebook enjoyed considerable performance gains from this first version of HipHop for several years, it sought other ways to speed the delivery of dynamically created Web pages to its billion or so users. “Our performance strategy for that was going to tap out,” Pobar admitted.

HHVM is the next step for Facebook. Under development for about three years, HHVM works on the same principle as the Java Virtual Machine (JVM). HHVM has a just-in-time (JIT) compiler that converts the human-readable source code into machine code at the moment it is needed. (The previous HipHop, renamed HPHPc, has now been retired within Facebook.)

This JIT approach allows the virtual machine to “make smarter decisions at runtime,” Pobar said. For instance, if a call is made to the MySQL database to read a row of data, the HHVM can, on the fly, figure out what type of data it is, such as an integer or a string. It then can generate or call code on the fly that would be best suited for handling this particular type of data.
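A rough sketch of what that buys, again my own illustration rather than HHVM's internals (the names Cell, add_generic, and add_observed_ints are invented): once the runtime has observed that a database column always holds integers, a JIT can emit a fast path behind a cheap type guard and fall back to generic code only when the guard fails:

    #include <cstdint>
    #include <string>
    #include <variant>

    // One cell read from a database row; its static type is unknown.
    using Cell = std::variant<int64_t, std::string>;

    // Generic path: handles every type, paying for dispatch and coercion.
    int64_t add_generic(const Cell& a, const Cell& b) {
        auto to_int = [](const Cell& c) -> int64_t {
            if (const auto* i = std::get_if<int64_t>(&c)) return *i;
            return std::stoll(std::get<std::string>(c)); // string -> int
        };
        return to_int(a) + to_int(b);
    }

    // Specialized path a JIT might emit after observing only integers
    // at this call site: a cheap guard, then a raw machine add.
    int64_t add_observed_ints(const Cell& a, const Cell& b) {
        const auto* x = std::get_if<int64_t>(&a);
        const auto* y = std::get_if<int64_t>(&b);
        if (x && y) return *x + *y;    // fast path, no dispatch
        return add_generic(a, b);      // guard failed: fall back
    }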

With the old HipHop, “the best it can do is analyze the entire Facebook codebase, reason about it and then specialize code based on its reasoning. But it can’t get all of the reasoning right. There are parts of the code base that you can not simply infer about or reason about,” Pobar said.

Virtual system speedier

Pobar estimated that HHVM is about twice as fast as HPHPc was, and about nine times as fast as running straight PHP.

Facebook has posted the code for HHVM on GitHub, with the hopes that others will use it to speed their PHP websites as well.

HHVM is optimized for handling very large, and heavily used, PHP codebases. Pobar reckoned that using HHVM for standard sized websites, such as one hosting a WordPress blog, would gain only about a fivefold performance improvement.

“If you take some PHP and run it on HipHop, the CPU execution time [may] not be the limiting factor for performance. Chances are [the system is] spending too much time talking to the database or spending too much time talking to [the] memcache” caching layer, Pobar said.

Attend Meeting C++ 2013

  • Posted on February 25, 2017 at 5:42 am

Boost Dependency Analyzer

I have something special to announce today: a tool I’ve built over the last 2 weeks, which allows you to analyze the dependencies in boost. With boost 1.53 this spring I had the idea to build this, but not the time, as I was busy writing a series over the papers for Bristol. Back then I realized how easy it could be to build such a tool, as the dependencies could be read & listed by boost’s bcp tool. I already had a prototype for the graph part from 2010. But let’s have a look at the tool:

The tool is very easy to handle; it is based on the output of bcp, a tool that comes with boost. Actually bcp can help you with ripping libraries out of boost, so that you don’t have to add all of boost to your repository when you would like to use smart pointers. But bcp also has a listing mode, where it only shows the dependencies; that’s what my tool builds upon. Let’s have a short look at the results, the dependencies of boost 1.54:

A few words on how to read this graph. The libraries in the middle of the “star shape” are the ones with the most dependencies; each line between the nodes is a dependency. A dependency can be one or multiple files. The graph layout is not weighted.

How to

A short introduction on what you need to get this tool to run. First boost, as this tool is built to analyze boost. I’ve tested with some versions (1.49 – 1.54) of boost. You also need a version of bcp, which is quite easy to build (b2 tools/bcp). Then you simply need to start the tool. If BOOST_ROOT is set, the tool will try to read it; otherwise you will be asked to choose the location of boost when clicking on Read dependencies. The next thing is selecting the location of bcp. That is the setup, and the tool will now run for some time. On my machine the analysis takes 90 seconds to 2 minutes; it might take a lot longer on yours, depending on how many cores you have. The tool will spawn a bcp process for each boost library (~112) and analyze its output in a thread pool. After this is done, the data is loaded into the tool, and then saved to a SQLite database, which will be used if you start the tool a second time and select this version of boost. Loading from the database is far faster.
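In case you wonder what the per-library step looks like, here is a minimal sketch of it in plain Qt, assuming bcp is on the PATH; the function name listDependencies is made up for the example, and the real tool does this through my ProcessingSink wrapper and a thread pool rather than this simplified form:

    #include <QProcess>
    #include <QString>
    #include <QStringList>

    // Run "bcp --list <library>" and return the files it reports,
    // one path per line. Paths outside the library's own directory
    // hint at dependencies on other boost libraries.
    QStringList listDependencies(const QString& boostRoot,
                                 const QString& library)
    {
        QProcess bcp;
        bcp.start("bcp", QStringList()
                             << "--list"
                             << "--boost=" + boostRoot
                             << library);
        if (!bcp.waitForFinished(-1))   // wait until bcp is done
            return QStringList();       // bcp failed to run
        const QString out = QString::fromLocal8Bit(
            bcp.readAllStandardOutput());
        return out.split('\n', QString::SkipEmptyParts);
    }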

A screenshot to illustrate this:

[Screenshot: tl_files/blog/bda/bda.png]

To the left are all the boost libraries; the number of dependencies is shown in the braces. To the right is a tab widget showing all the dependencies; the graph is laid out with boost graph. When you click on show all you’ll get the full view of all dependencies in boost. The layout is done in the background, so this will take some time to calculate, and is animated when it’s done. The results of the layout are good, but not perfect, so you might have to move some nodes. Exporting supports images, which are transparent PNGs; not all services/tools are happy with that (e.g. neither facebook, twitter nor G+ could handle the perfectly fine images), but this can be fixed by postprocessing the images and adding a white background.

Inner workings

I’ve already written a little about the tool’s internals. It’s built with Qt 5.1 and boost, where boost is mostly used for the graph layout. As I chose to work with Qt5, it has a few more dependencies; for Windows this sums up to an 18 MB download, which you’ll find at the end. The tool depends on 3 libraries from my company Code Node: ProcessingSink, a small wrapper around QProcess that allows you to just start a bunch of processes and lets you connect to the finished and error slots. This was necessary, as I could only spawn 62 parallel processes under Windows, so this library now takes care of spawning the parallel processes, currently 50 at a time. GraphLayout is the code that wraps the inner workings of boost::graph; it’s a bit dirty, but lets me easily handle the graph layout. The 3rd library is NodeGraph, which is the graph UI, based on Qt’s Graphics View framework.
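To give you an idea of what GraphLayout wraps, here is a minimal boost::graph sketch of my own (not the actual library code): it builds a toy dependency graph and lays it out with the Fruchterman-Reingold algorithm:

    #include <iostream>
    #include <vector>
    #include <boost/graph/adjacency_list.hpp>
    #include <boost/graph/fruchterman_reingold.hpp>
    #include <boost/graph/random_layout.hpp>
    #include <boost/graph/topology.hpp>
    #include <boost/property_map/property_map.hpp>
    #include <boost/random/linear_congruential.hpp>

    using Graph    = boost::adjacency_list<boost::vecS, boost::vecS,
                                           boost::undirectedS>;
    using Topology = boost::square_topology<boost::minstd_rand>;
    using Point    = Topology::point_type;

    int main() {
        // Toy dependency graph: 0 = config, 1 = smart_ptr, 2 = graph.
        Graph g(3);
        boost::add_edge(1, 0, g);   // smart_ptr depends on config
        boost::add_edge(2, 0, g);   // graph depends on config
        boost::add_edge(2, 1, g);   // graph depends on smart_ptr

        boost::minstd_rand rng;
        Topology topo(rng, 50.0);   // lay out inside a 50x50 square
        std::vector<Point> pos(boost::num_vertices(g));
        auto posMap = boost::make_iterator_property_map(
            pos.begin(), boost::get(boost::vertex_index, g));

        boost::random_graph_layout(g, posMap, topo);    // random start
        boost::fruchterman_reingold_force_directed_layout(g, posMap, topo);

        for (std::size_t i = 0; i < pos.size(); ++i)
            std::cout << "node " << i << ": ("
                      << pos[i][0] << ", " << pos[i][1] << ")\n";
    }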
I plan to release the tool and its libraries under the GPL later on GitHub; for now I don’t have the time to polish everything.

Problems

One of the earliest questions I had when thinking about building such a tool was where to get a list of the boost libraries. This sounds easy, but I need to have it readable by machine, not human. There is an HTML list, but I have refused to write a parser for it so far. I talked to some people about this at C++Now, and most agreed that the second option would be best: maintainers.txt. That’s what the tool currently reads to find the boost libraries. Unfortunately at least lexical_cast is missing from this list. So the tool isn’t perfect yet; while lexical_cast is already patched, I’m not sure if anything else is missing. A candidate could be signals, as it’s not maintained anymore. Currently the tool analyzes 112 libraries for 1.54.

boost dependencies

Working for 2 weeks on this tool has given me some insight into the dependencies in boost. First, the way it is shown in the tool is the view of bcp. Some dependencies will not affect the user, as they are internal; e.g. a lot of libraries have a dependency on boost::test, simply because they provide their tests with it. The bcp tool really gets you ALL the dependencies. Also most (or was it all?) libraries depend on boost::config. I plan to add filtering later, so that the user has the ability to filter out some of the libraries in the GraphView.

The tool

Here is how to get the tool for now: there is a download of the binaries for Windows and Linux. I’ll try to get you a deb package as soon as I have time, but for now it’s only the binaries for Linux; you’ll have to make sure to have Qt 5.1 etc. on Linux too, as I do not provide them. For Windows, there are 2 archives you’ll need to download: the program itself, and the needed DLLs for Qt 5.1 if you don’t have the SDK installed (in that case you can also copy them from the bin directory).

Note on Linux: this is a one-day-old beta version. I will update it later.

Apple’s software developer site hacked

  • Posted on February 20, 2017 at 6:53 pm

Computer and software giant Apple said Monday morning that its software developer site has been offline since it was hacked. Apple warned that users’ personal information may have been stolen.

“Last Thursday, an intruder tried to break into the personal information of our registered developers on our developer page,” Apple said as quoted by AFP.

Although the sensitive information is encrypted, “we cannot rule out the possibility that some of the names, mailing addresses and email addresses belonging to developers may have been accessed.”

The information technology giant said in a statement titled “We’ll be right back” that the hacker claimed to have acted “for the sake of transparency and accountability.”

Company officials took the site offline last Thursday, U.S. time, and have since worked to fix it.

“To prevent a security threat like this from happening again, we will completely overhaul our developer systems, update our server software, and rebuild our entire database,” said Apple, apologizing and expressing hope that the developer site will be back soon.

The site is a portal where third-party software creators design applications for the iPhone, iPad and Mac computers, as well as a forum for software developers.

According to Macworld, for days many developers have been posting messages of frustration and anger on Twitter about the site’s outage.

Logitech headsets and webcams for the business professional

  • Posted on February 19, 2017 at 10:07 pm

As many of you know, I’m a full-time telecommuter. Although a portion of my work involves some travel, most days I am working from home, and a lot of that involves sitting on conference calls with colleagues and customers/partners.

Until recently, much of that required that I be desk-bound.

Anyone who has to work with VOIP and IP-based conferencing systems such as Skype, Microsoft Lync, Cisco WebEx and Citrix GoToMeeting knows that voice quality is everything if you’re going to have an effective business conversation.

And that means using devices that typically tie you to your desk, such as a wired headset or a Bluetooth/USB speakerphone like the Plantronics Callisto, which I have and think is an excellent product.

While there are many Bluetooth headsets and earpieces on the market which are perfectly suitable for mobile phone conversations, few are specifically optimized for use with PCs running VOIP “soft phone” software, and most do not deliver what I would regard as business-critical voice quality.

They are perfectly fine for short calls, but not ideal when you are on a VOIP conference for as much as an hour at a time, or even longer, particularly when you need to be an active participant and when paying close attention to who is speaking and the clarity of what you are saying is essential.

As we all know about Bluetooth when it comes to audio streams, the farther you get away from the transceiver, the worse the audio gets. So it’s not practical to stray too far away from your PC.

Logitech’s latest wireless headsets have been a total game changer for my personal work situation since I’ve been using them the last few months. I’ve been using the H820e stereo version which retails for $199 but can be found for considerably less.

Installation and use of the headset is pretty straightforward — you plug the DECT 6.0 transmitter and charging base into a free USB port on your PC or Mac, and the AC power cord to power the base. The headset charges on the base when not in use, and has a built-in rechargeable battery.

The operating system recognizes it automatically, and depending on the VOIP program you are using, you may need to alter the settings to use the headset as your primary audio device.

If you’re familiar with the DECT 6.0 1.9Ghz wireless transmission standard, particularly if you have cordless phones in your house that use the technology, you know that you can get some pretty impressive range and not lose any voice quality. That’s exactly what the H820e headset gives you for VOIP calls.

My home office is a good 60 feet away from my living room and around 75 feet from my “breakfast area,” which has my espresso machine and a table facing my outdoor patio and pool area, with outdoor furniture about 100 feet or so away from the base transmitter.

So regardless of what VOIP software I am using, and where I am in my house, I get the same crystal-clear voice quality as if I am sitting right in front of my PC. For example, this wearable computing podcast that I recorded with Rick Vanover of Veeam was actually done in my living room, while wearing the H820e using Skype.

So the quality of the audio is without dispute. What about the overall design and using it?

The H820e was designed for use for hours at a time. The stereo version is comfortable and after a while you forget you even have it on your head. While I am extremely pleased with the device, I have only a few nitpicks:

First, the “Mute” button is attached to the microphone boom and is recessed back towards where the headphone is. It doesn’t stick out prominently, so you have to sort of feel your way up the boom to find it.

If you’re away from your PC and are not near the software controls of your VOIP client, and some sort of unplanned audio distraction occurs that you don’t want everyone else to hear, it could take a few seconds to mute the audio while you fumble around with the boom. It would be better if, in the next version of this product, they put it on the exterior side of the earpiece that holds the boom.

It’s a minor annoyance but it’s still an annoyance nonetheless.

The second is the boom mic’s sensitivity to airflow. Now, normally you don’t have a lot of “wind” in an indoor or office setting but in the summertime in Florida, I like to have a fan going in my office for better air circulation.

If that fan is pointed directly at me, it sounds like I am in an outdoor breeze. And if you are actually outdoors (like sitting on my patio and having a cup of coffee) and a little bit of wind picks up, you’re going to hear it if the mic isn’t muted, no question.

Also, if you are a heavy breather, you’ll probably want to have the boom twisted a lot farther away from your mouth than you think you need it.

Despite what I would call these two minor nitpicks, I think the H820e is an excellent product and I heartily recommend it. I’ve also spent some time with their wired headset, the H650e, on business trips with my laptop and also on my Surface RT using Skype and Lync, and the audio is just as high quality as the H820e’s, provided your bandwidth supports the fidelity of the connection.

Not all telecommuting and conferencing is about audio, however. From time to time I do need to do video as well.

My corporate laptop, a Lenovo X1 Carbon, is a great little machine, but its webcam isn’t its strong suit. When it’s docked to my monitor on my desk at home, I need something that delivers much more robust, HD-quality video.

I’ve written about small-business and SOHO/workgroup video conferencing products before, like Logitech’s BCC950. While the BCC950 is an excellent product for small meeting rooms and for having three to five people on camera at once, it’s overkill for a telecommuter or just someone in a single office.

Enter the Logitech C930e, a “Business” webcam. Like any other webcam it clips to the top of your monitor and plugs into your USB 2.0 or 3.0 port. But this is no ordinary webcam.

At a street price of $129.00 it’s more expensive than Logitech’s consumer/prosumer webcam offerings, but there’s considerable enterprise-class video conferencing technology built-into this little device.

First, provided your bandwidth supports it, the C930e can capture 1080p video (or 15MP stills) at 30 frames a second because it includes Scalable Video Coding using H.264 and UVC 1.5, the latter of which is needed for certification with corporate-grade video conferencing tools.

Second, the camera has a 90-degree diagonal field of view, so you get a widescreen capture of the subject without any “fish eye” distortion. You also get a Carl Zeiss lens and 4X digital zoom with software pan and tilt control, as well as built-in stereo microphones.

Logitech also offers the consumer-oriented C920, which is about $30 cheaper than the C930e, but it lacks the Scalable Video Coding and UVC 1.5 capabilities used with corporate applications like Lync and Cisco UC and is more suited to Skype and other consumer video applications like Google Hangouts. It also lacks the 90-degree FOV of its more expensive sibling.

While the two cameras look very similar, they shouldn’t be confused with each other. If corporate video conferencing capability and quality is definitely what you need, you want the C930e.

Intel Core i7 HEDT Ivy Bridge-E to Be Released Before September 11, 2013

  • Posted on February 17, 2017 at 10:13 am

The popular Sandy Bridge-E CPUs will reportedly soon be replaced by Intel Core i7 “Ivy Bridge-E” CPUs, which recent benchmarks suggest are among the most advanced desktop CPUs and the latest HEDT parts.

According to leaks reported by the Japanese news site Hermitage Akihabara, Intel will officially release a few new flagship CPU models between September 4 and 11, 2013.

Although only the Core i7-4960X Extreme Edition, Core i7-4930K and Core i7-4820K have been revealed so far, several other new models will likely appear later.

The HEDT platform itself sits one architectural generation behind the current consumer parts.

Hacking with a Hacker

  • Posted on February 16, 2017 at 7:44 pm

What is it like to hack with one of the original hackers? It is certainly much different than what appears to be the modern rendition of hacking. My experience was not getting really drunk with tons of junk food. It was not working on “beautiful” designs or “authentic” typography. It was not so much about sharing with the world as it was sharing with your peers. It had a very different feel to it than the “hacker culture” promoted by some of the top technical Silicon Valley companies. It felt more “at home”, less dreamy, and more memorable.

I meet with Bill Gosper every so often; I had the pleasure of giving him a tour of Facebook when I worked there. (He was so surprised that they had Coke in glass bottles there, just like the old days.)

He is still very much a hacker, a thinker, a tinkerer, and a wonderer. Every time I meet up with him, he has a new puzzle for me, or someone around him, to solve, whether it’s really clever compass constructions, circle packing, block building, Game of Life automata solving, or even something more tangible like a homemade buttonhole trap (which was affixed to my shirt for no less than two weeks!). He is also the bearer of interesting items, such as a belt buckle he gave me which depicts, in aluminum, a particular loose circle packing.
Gosper succeeding in tricking me with the Buttonhole Trap
When we meet up, all we do is hack. Along with him and one of his talented young students, we all work on something. Anything interesting, really. Last time we met up, we resurrected an old Lisp machine and did some software archeology. I brought over some of the manuals I own, like the great Chinual, and he brought over a dusty old 1U rackmount Alpha machine with OpenGenera installed. After passing a combination of hurdles, such as the keyboard not interfacing with the computer correctly, we finally got it to boot up. I got to see with my own eyes a time capsule containing a lot of Bill’s work from the 70s, 80s, and 90s, which could only be commanded and examined through Zmacs, Dired, and Symbolics Common Lisp. Our next goal was to get Symbolics Macsyma fired up on the old machine.

There was trouble with starting it up. License issues were one problem; finding and loading all of the compiled files was another. Running applications on a Lisp machine is very different from what we do today on modern machines, Windows or UNIX. There’s no .exe file to click, or .app bundle to start up, or even a single ./file to execute. Usually it’s a collection of compiled “fast loading” or “fasl” files that get loaded side-by-side with the operating system. The application, in essence, becomes a part of the OS.

In hacker tradition, we were able to bypass the license issues by modifying the binary directly in Lisp. (Fortunately, Lisp makes things like disassembly easy.) But how do we load the damn thing? Bill frustratedly muttered, “It’s been at least 20 years since I’ve done it. I just do not remember.” I, being an owner of MacIvory Symbolics Lisp machines, fortunately did remember how to load programs. “Bill, how about LOAD SYSTEM Macsyma?” He typed it into the native Lisp “Listener 2” window (we kept “Listener 1” for debugging), sometimes making a few typing mistakes, but finally succeeding, and then we saw the stream of files loading. We all shouted in joy that progress was being made. I recall Bill was especially astounded at how fast everything was loading. This was on a fast Alpha machine with gobs of memory. It must have been much slower on the old 3600s they used back in the day.
The Lisp Machine Manual, or Chinual
It was all done after a few minutes, and Macsyma was loaded. To me, this was a sort of holy grail. I personally have Macsyma for Windows (which I use in a VirtualBox machine on my 17″ MacBook), and I’ve definitely used Maxima. But this Macsyma was something I’d never seen. It seems to have disappeared with history; I had not been able to find a copy in the last decade.

Bill said, “let’s see if it works,” and typed 1+1; in, and sure enough, the result was 2. He saw I was bubbling with excitement and asked me if I’d like to try anything. “I’d love to,” I said, and he handed the keyboard over to me and I typed in my canonical computer algebra test: integrate(sqrt(tan(x)), x);, which computes the indefinite integral
∫ √(tan x) dx
Out came the four-term typeset result of logarithms and arctangents, plus a fifth term I’d never seen before. “I’ve never seen any computer algebra system add that fifth term,” I said, “but it does not look incorrect.” The fifth term was a floored expression whose value increased with the period of the function preceding it. “Let’s plot it,” Bill said. He plotted it using Macsyma’s menu interface, and it was what appeared to be an increasing, non-periodic function. I think we determined it was because Macsyma properly handled branch cuts, which other systems have been known to be unorthodox about. It definitely had a pragmatic feel to it.
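For reference, one standard closed form of that antiderivative, transcribed here from memory in LaTeX notation rather than copied from Macsyma's output, and without the extra floor term that keeps the result continuous across branches, is:

    \int \sqrt{\tan x}\,dx
      = \frac{1}{\sqrt{2}} \arctan\!\left( \frac{\tan x - 1}{\sqrt{2\tan x}} \right)
      + \frac{1}{2\sqrt{2}} \ln\!\left( \frac{\tan x - \sqrt{2\tan x} + 1}{\tan x + \sqrt{2\tan x} + 1} \right)
      + C

Splitting the arctangent and the logarithm each into two parts gives a four-term mixture of arctangents and logarithms like the one Macsyma printed.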

Now, Bill wanted to show us some interesting things; however, all of Bill’s recent Macsyma work was on his laptop. How do we connect this ancient hardware to a modern Macintosh? We needed to get the machine onto the network, and networking with old machines is not my forte.

Fortunately, Stephen Jones, a friend of Bill’s and seemingly an expert at a rare combination of technical tasks, showed up. He was able to do things that neither Bill nor I could do on such an old machine. In only a few moments, he was able to get Bill’s Mac talking to the Alpha, which shared a portion of its file system with Genera. “Will there be enough space on the Alpha for my Macsyma files?” Bill asked Stephen. “Of course, there’s tons of space.” We finally got Bill’s recent work transferred onto the machine.
Bill hacking in Macsyma in OpenGenera (Image courtesy of Stephen M. Jones)
We spent the rest of the night hacking on math. He demonstrated to us what it is like to do a real mathematician’s work at the machine. He debuted some of his recent work. He went through a long chain of reasoning, showing it to us line after line in Macsyma, doing amazing number-theoretic things I’d never seen before.

I did ask Bill why he does not publish more often. His previous publications have been landmarks: his summation algorithm for hypergeometric series and his algorithm for playing the Game of Life at light speed. He responded, “when there’s something interesting to publish, it’ll be published.” He seemed to have a sort of disdain for “salami science,” where scientific and mathematical papers present the thinnest possible “slice” of a result.

Bill is certainly a man that thinks in a different way than most of us do. He is still hacking at mathematics, and still as impressive as before. I’m very fortunate to have met him, and I was absolutely delighted to see that even at 70 years old, his mind is still as sharp as can be, and it’s still being used to do interesting, Gosper-like mathematics.

And you would not believe it. We all were ready to head home at around 9 PM.

TSP Symposium 2013 Keynotes to Focus on Quality Practices for Critical Software

  • Posted on February 16, 2017 at 3:25 pm

The Carnegie Mellon University Software Engineering Institute (SEI) has announced the slate of software engineering thought-leaders who will serve as keynote speakers for the Team Software Process (TSP) Symposium 2013. Held in Dallas, Texas, on September 16-19, the TSP Symposium 2013 keynote line-up includes Bill Curtis, senior vice president and chief scientist with Cast Software; Enrique Ibarra, senior vice president of technology of the Mexican Stock Exchange (BMV); and Robert Behler, chief operating officer of the SEI.

The symposium theme, When Software Really Matters, explores the idea that when product quality is critical, high-quality practices are the best way to achieve it.

“When a software system absolutely must work correctly, quality must be built in from the start. A disciplined approach to quality also offers the benefit of lower lifecycle costs. The TSP promotes the application of practices that lead to superior, high-quality products,” said James McHale, TSP Symposium 2013 technical chair. “Our keynote speakers and representatives from industry and government organizations from around the world will share how using TSP helps organizations build quality in from the start when there’s no room for error.”

  • Curtis will assert that the stakes for software-caused operational problems are now larger than ever, approaching a half-billion dollars per incident. Every other aspect of the business is managed by numbers, including IT operations. Software lags behind, however, because the culture of craftsmanship still prevails. Curtis’s talk will challenge that culture: Quality measurement will be challenged for under-measuring non-functional, structural quality, the cause of many operational disasters. Productivity measurement will be challenged for not penalizing baselines when rework is shifted into future releases as technical debt. Software measurement will be challenged to better express outcomes in terms that justify investments for improving quality. The word “quality” will be challenged as the wrong way to frame the argument. Curtis will propose a measurement stack or measurement pyramid to help translate software numbers to business numbers. At the foundation of this pyramid are the Personal Software Process (PSP) and TSP.
  • Ibarra will detail the Mexican Stock Exchange’s (BMV) broad plan of technological renovation that included migration to a new state-of-the-art data center and creating new operational systems with better functionalities and quality attributes. Since 2005, the BMV, which is responsible for operating the cash and derivatives market of the country and is the only exchange in Mexico, has faced the constant challenge of accommodating an exponential growth of demand for its transactional services as well as pressure from the market to offer services with better response times and functionalities. One of the most challenging software projects included in this technological renovation plan was the redesign and construction of the operational system known as the trading engine, which has strict and ambitious requirements for speed (latency), scalability, and continuous availability. The new system, which was to be designed and built internally, and the project were called MoNeT. The BMV had two goals for MoNeT: making sure a carefully considered and reviewed system architecture was in place prior to building the system and adopting a software development process that maximizes the quality of the new system and ensures that it complies with its intended quality attributes. Ibarra will describe the most relevant aspects of the MoNeT project, its performance in production, and the business impact it had on the BMV.
  • Behler, one of only 139 individuals qualified as pilots of the Lockheed SR-71 Blackbird aircraft, will describe his experience flying the fastest, most physically demanding aircraft in the world to gather vital data during the Cold War and the teamwork approach it took to develop the aircraft. The SR-71 was developed in the 1960s with myriad sophisticated sensors used to acquire highly specific intelligence data. The aircraft remains an icon of American aerospace engineering to this day and is considered to be the most effective reconnaissance aircraft in history.

In addition to the keynote speakers, substantial technical program, and organized networking events, the TSP Symposium 2013 also offers practitioners an in-depth learning opportunity with full-day tutorials on introductory and advanced TSP concepts.

“I am very excited about this year’s lineup of keynote speakers and technical presenters. The symposium should be stimulating with presentations on a broad array of topics related to quality-focused software development. It is also an excellent way for participants to network and exchange diverse ideas about how they have used the PSP/TSP approach to achieve their software quality goals,” said Mark Kasunic, Symposium co-chair.