Text-Only Remote Screen Viewer

Since installing Ubuntu for the first time back in 2007, have you been slowly falling into Stallman-like terminal purism? Do you now shun anything with a GUI, browsers included? Do you check your mail from the command line, even though you’re behind seven proxies? But do you still want to play Minecraft? If so, this command-line-only screen viewer might be just the tool for you.

This remote screen viewer was written in Python by [louis-e], and once running it allows a client to view the server’s screen even when the client is a text-only console; in the demo it runs inside a plain Windows command prompt. The server script scans the screen, and the client renders it in the console using the various colors and glyphs available. As a result, both the resolution and the refresh rate are quite low, but it’s still effective enough to run Minecraft and perform other GUI-based tasks, as long as there’s no fine text to read anywhere.
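The heavy lifting in a viewer like this is turning pixel data into something a terminal can draw. As a rough sketch of the idea (not [louis-e]’s actual code), here is how a frame of RGB pixels could be rendered as colored blocks using 24-bit ANSI escape codes, assuming a terminal that supports them; real screen capture would need a library and is left out:

```python
# Sketch: render a grid of RGB pixels as colored "pixels" in the terminal
# using 24-bit ANSI background-color escape codes. Actual screen capture
# (e.g. via a screenshot library) is out of scope here.

def frame_to_ansi(pixels):
    """pixels: list of rows, each row a list of (r, g, b) tuples."""
    lines = []
    for row in pixels:
        cells = []
        for r, g, b in row:
            # Two spaces on a colored background approximate a square pixel.
            cells.append(f"\x1b[48;2;{r};{g};{b}m  ")
        lines.append("".join(cells) + "\x1b[0m")  # reset color at line end
    return "\n".join(lines)

if __name__ == "__main__":
    # A tiny 2x2 test "screen": red, green / blue, white
    frame = [[(255, 0, 0), (0, 255, 0)],
             [(0, 0, 255), (255, 255, 255)]]
    print(frame_to_ansi(frame))
```

Looping that over successive screenshots, with the frame shipped across a socket, is essentially the whole trick; the low resolution comes from each character cell standing in for a region of the real screen.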

The video below shows only a quick demo of the remote screen viewer, but we can imagine plenty of uses beyond this proof-of-concept gaming demonstration. Not every computer strictly needs a desktop environment and window manager, so if you don’t want to spend the time and resources installing those components, this could be a practical solution. And if you’re looking for remote desktop software for more exotic machines, take a look at this software that brings remote desktops to antique Macs.

Why Industry 4.0 should think more like Apple

For industrial applications, the Internet of Things risks becoming an Internet of Thieves. Perhaps the industries deploying connected solutions should take a page from Apple’s book and lock down their infrastructure.

That’s what ethical hackers say

With digital processes now deeply embedded across every industry, industrial control systems were tested in this year’s Pwn2Own competition, where hackers were asked to find vulnerabilities in industrial software and systems.

Competition winners Daan Keuper and Thijs Alkemade found that once they were able to access the IT networks used by these companies, it was “relatively easy” to crash systems and equipment.

This is partly because, at this stage of the transition, most of the equipment used in manufacturing was never designed to connect to the Internet, or has weak or outdated security.

IT certainly understands this, which is why industrial IoT deployments tend to rely on securing the IT networks they use. But this means that if those networks are breached, most of the deployed devices lack any additional protection. And that means countless potential attack surfaces.

IT security has never been better, but the threat to critical infrastructure is growing.

When things go wrong

Once security is breached, attackers can seize equipment, alter processes, or simply shut down production. This can have huge consequences: across the company, its customers and partners, and along the entire supply chain.

“Systems in a factory environment typically run 24/7, so there is very little opportunity to patch vulnerabilities,” said Louis Prim, a consultant at ICT Group. “In addition, there is a lot of legacy equipment, because machines are purchased for the long term, and there is usually no way to install antivirus applications on them. All of this leaves the industrial sector exposed to hostile actors.”

Speaking to MIT Technology Review, the Pwn2Own winners warned that security in industrial settings lags behind. They recalled how a successful attack against Target a few years ago used an insecure HVAC system to penetrate the corporate network, showing the need to protect every exposed endpoint.

These days, more than ever, security lives at the edge.

The writing was on the wall

It’s not as if we didn’t see this kind of problem coming.

The evolution of industrial IoT has spawned countless standards with differing security levels. This has led many in the space (including Apple) to work on common standards for connected devices.

Matter, the consumer IoT standard that is the first result of that effort, is expected to arrive this year, while the related Thread networking standard is already seeing adoption. (I hope to hear more about Matter soon, possibly at WWDC.)

[Also read: WWDC: Is Apple preparing to give iPad a mammoth upgrade?]

“Thread is based on the universally established Internet Protocol version 6 (IPv6) standard, which makes it extremely powerful. A Thread network does not rely on a central hub, such as a bridge, so there is no single point of failure. And Thread has the ability to self-heal: if a node (or an accessory on your Thread network) becomes unavailable, the data packets will automatically select an alternate route and the network will simply keep working,” Eve Systems explains.

The Apple way

To some extent, one way to protect any device is to follow one of Apple’s core principles: make sure systems handle as little information as possible.

While that effort has arguably slowed the company’s progress in AI compared to its more cloud-centric competitors, Apple’s focus on keeping intelligence at the edge increasingly looks like the right call.

Technology and business decision-makers, for example, seem to be developing industrial IoT systems that follow a model in which intelligence sits at the edge.

Combined with other emerging network technologies, such as SD-WAN or private 5G networks, intelligence at the edge helps secure entire networks by helping to lock down individual endpoints.

The problem, of course, is that not every connected system is smart enough to be secured, while the differing priorities of IT and operational technology mean that attackers enjoy a luxury of potential vulnerabilities to attack.

And all this is before short-sighted governments force sideloading onto mobile systems and platforms, and backdoors into the inherently insecure device protections we increasingly rely on to protect our connected infrastructure.

Maybe enterprise IoT needs to borrow a page from Apple’s book and design systems that are instinctively more secure than anyone thinks they need to be? Because for anyone who does less, it is only a matter of time before they find out it wasn’t enough.

Follow me on Twitter, or join me in AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Copyright © 2022 IDG Communications, Inc.

Windows 11: Should You Bypass Hardware Blocks?

If you’re like most PC users, your current computer can’t run Windows 11. Microsoft has drawn a line in the hardware sand to ensure that only modern machines with certain features offering robust security can run Windows 11.

Well, sort of. The company provides a workaround, which I’ll discuss in a moment. The question is whether you (or your users) should take advantage of this loophole to upgrade your PC to Windows 11.

First, if you want to know whether a computer can run Windows 11, you can use the PC Health Check app, Microsoft’s diagnostic tool. But if your PC doesn’t support Windows 11, Microsoft’s app doesn’t do a great job of explaining why. Instead, I recommend a Windows 11 requirements-checking tool such as WhyNotWin11, available on GitHub. These tools provide details on why a machine won’t run Windows 11. On my personal laptop at home, for example, the processor can’t support hypervisor-enforced code integrity, and Windows 11 doesn’t like the display graphics.

But do you really have to meet all of Microsoft’s requirements to get an acceptable experience with Windows 11? If a machine isn’t that old, what happens if I install Windows 11 on it anyway?

Bypassing the Windows 11 hardware block

As has often been the case over the years, Microsoft has left itself some wiggle room in the Windows 11 hardware mandate, indicating that you can use the following registry key to bypass the hardware block:

Registry key: HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup

Name: AllowUpgradesWithUnsupportedTPMOrCPU

Type: REG_DWORD

Value: 1
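For those who prefer not to click through Registry Editor, the same key, name, and value can be saved as a .reg file and imported by double-clicking it; the dword:00000001 below corresponds to the Value: 1 above (REG_DWORD is the type Microsoft’s documentation specifies for this value):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```

As always with registry edits, export a backup of the key first, and remember this only relaxes the TPM and CPU checks; other requirements still apply.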

This tactic comes with a caution from Microsoft: if you install Windows 11 on a PC that does not meet the minimum hardware requirements, “your PC will no longer be supported and won’t be entitled to receive updates.”

Note, however, that so far Microsoft has not enforced its threat of withholding updates from such users. I personally read it as more of a performance caveat: if some sort of performance issue crops up on certain unsupported CPUs, my guess is that Microsoft won’t work to fix it.

For personal systems, especially for tech-savvy end users who like to try new things and keep good backups, and especially on spare computers, I have little concern about using the workaround Microsoft itself has provided. Apparently the company is rolling its eyes, turning a blind eye, and accepting that we want to play around.

But do you really want to use this workaround in a business setting?

For any business, I would argue that you do want these hardware mandates. The fact is that Microsoft has added more protections for its enterprise customers than for individuals or small businesses. Some key Windows 11 security features are only supported if you have the proper licensing and Windows Enterprise; Credential Guard, for example, about which Microsoft writes:

“Windows 11 uses hardware-enabled, virtualization-based security capabilities to protect systems from pass-the-hash or pass-the-ticket credential attacks. This helps prevent malware from accessing system secrets even if the process is running with admin privileges. In the future, Credential Guard will be enabled by default for organizations using the Enterprise edition of Windows 11.”

This is why running Windows 11 requires hardware virtualization support and a TPM 2.0 chip. But unless you buy Windows 11 Enterprise, you won’t be able to deploy Credential Guard.

Windows 10 is a great option for many

That said, it may be too early to move your users to Windows 11 right now. Even businesses buying computers today that can run Windows 11 may be better off running Windows 10 on them for many years to come.

For many of us who have a computer at home as well as one we use in the office, having a different operating system on the two machines can be confusing. The two items that trip me up most when moving between Windows 11 and Windows 10 are the centered Start menu and taskbar. With the Windows 10 Start menu on the left side of the screen and the Windows 11 Widgets button now in that spot, I find myself clicking the Widgets menu when I want to shut down a Windows 11 computer. And the redesigned Windows 11 taskbar means I still stumble a bit to find cut, paste, and other tools.

If your machine is managed by Windows Update and qualifies for Windows 11, it should be offered on your system now. If you choose not to install Windows 11, you may be offered it again at a later date. Remember, you can use a registry key, Group Policy, or Intune to keep machines on Windows 10 instead of moving to Windows 11. Windows 11 will not be pushed to business devices managed by Intune or WSUS; an administrator must specifically approve Windows 11 upgrades.
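For reference, the registry-based hold on a feature release uses Windows Update’s target-release policy values. A .reg sketch follows; the values are the documented Windows Update for Business policies, though “21H2” here is just an example, so substitute whichever Windows 10 release you want to stay on:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"TargetReleaseVersion"=dword:00000001
"ProductVersion"="Windows 10"
"TargetReleaseVersionInfo"="21H2"
```

The same settings are available through Group Policy under Windows Update for Business, which is the more manageable route on domain-joined machines.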

Lately I’ve been helping people buy new computers, often slightly older laptops that are a good value. These PCs support Windows 11, but for now I’m setting registry keys to keep the systems on Windows 10. I plan to help them move up to 11 when the time is right.

For my own business, where many of my users still run Windows 10 at home, I am currently choosing to keep the firm’s computers on Windows 10. I find it easier for users to have the same type of system at home and at work. Over time, we’ll move more and more machines to Windows 11, and then I’ll decide whether to use the bypass strategy to bring any older systems up to Windows 11.

Copyright © 2022 IDG Communications, Inc.

How to move a full-size church organ from a house to a museum

As electronics hobbyists, we are grateful to the spouses and flatmates who put up with all our weird equipment and cluttered projects in their homes. But sharing a home with a pipe organ enthusiast takes dedication to a whole different level: back in the 1970s, an organist from Bristol went so far as to install a full-size church organ in their home, effectively transforming the modest residence into one huge musical instrument. Recently, however, the house passed to new owners who, understandably keen to reclaim some space, listed the entire system on eBay.

A pipe organ is installed in an attic
There’s no treasure in this attic; just lots of zinc pipes and pneumatic tubing.

Fortunately, the auction was won not by some scrap metal dealer, but by [Look Mum No Computer], our favorite expert on weird musical instruments. He drove all the way out to Kent to help take the organ apart and stuff the dozens of pipes, miles of wire, and countless valves, tubes, levers, and switches into his van. Once back home, he faced the daunting task of reassembling the whole lot into something capable of playing music, which he is currently documenting in a video series.

The organ’s new home is [LMNC]’s museum, This Museum Is (Not) Obsolete, where it gets its own room, decorated in the style of the house in which it spent most of its life. The first step in getting it working again was to fire up the blower, which is effectively a powerful electric air pump with a pressure-regulating mechanism. Once this was done, a row of pipes was added to test the actuation system, which consists of a set of solenoid valves that simply open or close the air supply to each pipe. [LMNC] still had an Arduino-based organ driver from an earlier project, which allowed him to hook a MIDI keyboard up to the partially-complete instrument and play a few notes on it.

There is still much to be done, but we are certainly impressed by what [LMNC] has achieved so far, and we can’t wait to see the organ regain its former glory. We already knew that you could control pipe organs with MIDI, and we’ve seen plenty of smaller ones built from scratch. Thanks for the tip, [hackbyte]!

Finally! A cyberdeck you want to use

Cyberdecks make for exciting projects, some rough and others polished, but on one thing even the most enthusiastic will agree: these home-made portable computers are not always the most convenient to use. So we are very happy to see this machine by [TRL], which takes the cyberdeck aesthetic and renders it in a shape that looks like it might actually be practical to use.

It takes a Raspberry Pi and a Waveshare 1280×400 capacitive touch screen, and mounts the combo, along with a keyboard, on an unusually well-designed 3D-printed chassis. With the screen laid flat, it resembles the venerable TRS-80 Model 100 “slab” computer of the early 1980s; flip it up, and a surprisingly usable laptop appears. Power comes from an external battery pack on a lead, though this was born more of necessity than choice, due to heat-handling problems with the internal PSU board. The finishing touch is a stylish custom laptop bag, making a combo we’d happily take on the train any day to bang out Hackaday articles.

Looking at it, we think it might even give the Clockwork DevTerm a run for its money. Alternatively, you could take a look at this upgraded TRS-80 Model 100.

Thanks [The Kilted Swede] for the tip!

Junk I bought: the 5 A PSU that wasn’t

I have an Acer monitor that I’ve owned for about 15 years, and thanks to the extra money I paid back then for the model sporting a DVI socket, for HDMI compatibility, it still finds a place as one of my desktop monitors. It has a power brick that supplies it with 12 V at 4.5 A, and over the years it has developed an annoying noise. Something magnetic has come loose, and I really should replace it. So I went to AliExpress and placed an order for a 12 V, 5 A power brick.

Something of a lightweight

Marked as a 12 V, 5 A PSU brick.
So far so good.

These units are pretty standard: a box about 130 mm by 60 mm with an IEC socket at one end and a trailing low-voltage cable at the other. I’ve had enough of them pass through my hands over the years to know what to expect, so I was disappointed to find that when my PSU arrived it was suspiciously light: 86 grams, against the roughly 250 grams I expected. I began to smell a rat. Enter the world of the fake, undersized switch-mode mains power supply.

Getting into Fort Knox should be easier than opening a mains power supply, as they are usually ultrasonically welded together for safety. The few times I’ve done it have required some Dremel time and some swearing, so when this case turned out to be fairly easy to pop open with a screwdriver, it was already clear this wasn’t a high-quality item. Sure enough, my suspicions were confirmed, for inside was a much smaller board. This is clearly not a 5 A power supply, so what had I received?

For a fake, it could be worse

A small PCB in a large enclosure
…but not so good inside.

On the board were the components I’d expect from a smaller switch-mode mains PSU: rectifier, electrolytic capacitors, control chip, opto-isolator, ferrite transformer. It’s a through-hole board, and unlike some plug-top chargers, the designers have given everything plenty of space. Flip it over, and there’s a reasonably healthy 6.25 mm of physical separation between the two sides, with an extra isolation slot beneath the opto-isolator. I can’t comment on the quality of the transformer without tearing it down separately, but perhaps it could be a little more chunky.

The board itself might be reasonable, were it not housed in an oversized box, attached to suspiciously hair-thin output conductors, and secured only by an adhesive pad. Zooming in on the chip, I found a CSC7224, a little 18 W 8-pin DIP. It’s a generic chip available from multiple Chinese manufacturers, and it implements a fairly straightforward switch-mode PSU. The board appears to follow the data sheet’s reference circuit quite closely, minus the mains filter, which means it is probably a functional, if worryingly unsafe, 12 V supply module. I’d be happy enough with it if I had a genuine need for 1.5 A.

So I was taken for a ride by a supplier on the other side of the world, and for your entertainment and edification I’ve turned it into a Hackaday article. Props to AliExpress here: when I raised a dispute with photos and hardware details, they refunded me without question. What else can I take from this, beyond a warning not to play random-PSU roulette again? The first thing is that, from the counterfeiter’s point of view, this unit is too cheap to be a successful fake. If I can tell it’s fake by its weight the moment I pick it up, they’ve failed, so I’m curious why they didn’t make it more believable by adding a little ballast. At least the chip has built-in overcurrent protection, so it should refuse to deliver 5 A rather than exploding.

In writing this I have no doubt exposed myself to ridicule in the comments, for obviously I should have known better. Have any of you ever been stung by a fake PSU?

Panelize your PCBs graphically with hm-panelizer

When you’re working with PCBs and sending a one-off design to those cheap Chinese fabs, getting from layout to production-ready Gerber files is a matter of pressing a few buttons, whatever PCB layout tool you prefer. But once you need a set of PCBs that form a larger system, or multiple copies for efficient production, you won’t get far without entering the world of PCB panelization. We’ve seen a few options over the years, and here’s another that looks quite promising: hm-panelizer by [halfmarble]. It is a cross-platform Python GUI application built on Kivy, so it should run on most major platforms without much hassle. The tool is in the early stages of development, so for now it only handles straight PCB edges with horizontal mouse-bites, but we’re sure that, given time and support, it will quickly grow more general-purpose capabilities.

In an ideal world, open source tools like KiCad would have a built-in panelizer, but for now we can only dream, and hm-panelizer might be good enough for some people. For more takes on panelization, check out our guide to making it easy, and here’s another way to do it, just to muddy the waters.


Making the case for COBOL

Perhaps somewhat unexpectedly, on March 14 of this year the GCC mailing list received an announcement of the first COBOL front-end for the GCC compiler. For the uninitiated, COBOL saw its first release in 1959, making it, at 63 years old, one of the oldest programming languages still in regular use. The reason for its staying power is its focus, from the very beginning, on being a transaction-based, domain-specific language (DSL).

Its name is an acronym for COmmon Business-Oriented Language, which spells out the domain it targets. Even with the current COBOL 2014 standard, it remains essentially the same transaction-oriented language, while adding support for structured, procedural, and object-oriented programming styles. Taking most of its core from Grace Hopper’s FLOW-MATIC language, it allows business logic, as one would encounter it at a financial institution or business, to be described succinctly in plain English.

Unlike the older GnuCOBOL project, which translates COBOL to C, the new GCC-COBOL front-end project skips that intermediate step and compiles COBOL source code directly to binaries. All of which raises the question of why a whole person-year was invested in this effort for a language that has been declared ‘dead’ for probably at least half of its 63-year existence.

Does it make sense to learn or even use COBOL today? Do we need a new COBOL compiler?

Punch-card beginnings

An IBM 704 mainframe used at NACA in 1957. (Credit: NASA)

To fully understand where COBOL came from, we need to go back to the 1950s: many years before minicomputers like the PDP-8, and a time when home computers like the Apple I were not even a twinkle in anyone’s eye. These were the days when dinosaurs, in the form of increasingly transistorized mainframes with wildly incompatible system architectures, lurked in the depths of universities and businesses.

Such differences existed even within a single manufacturer’s range of mainframes, for example IBM’s 700 and 7000 series. Since each mainframe had to be programmed for its intended purpose, usually scientific or commercial, this often meant that software written for a business or university would not run on new hardware without modifying or rewriting the old mainframe’s code, which added significantly to the cost.

Even before COBOL came on the scene, this issue was recognized by people like John W. Backus of BNF fame, who in late 1953 proposed to his superiors at IBM the development of a practical alternative to assembly language. This led to the FORTRAN scientific programming language, which, along with the LISP mathematical programming language, primarily targeted the IBM 704 scientific mainframe.

FORTRAN and other high-level programming languages offered two advantages over programming a mainframe in assembly: portability and development efficiency. The latter came primarily from being able to write single statements in a high-level language that translate into an optimized set of assembly instructions for the hardware, allowing scientists and others to create their own programs as part of their research or studies, instead of first having to learn the architecture of a particular mainframe.

The portability of a high-level language allowed scientists to share FORTRAN programs with others, who could then run them on their own institute’s mainframe, regardless of that mainframe’s system architecture and other hardware details. All that was needed was an available FORTRAN compiler.

UNIVAC I operator console at the Science Museum in Boston, USA.

Where FORTRAN and LISP focused on simplifying programming in the scientific domain, businesses had very different needs. Businesses follow rules set by the tax office and other official bodies, working through strict rule sets that must be applied to convert inputs such as transactions and revenue streams into payrolls and quarterly statements. Transforming those written business rules into something that ran exactly the same way on a mainframe was a major challenge. It was here that Grace Hopper’s FLOW-MATIC language, originally known as Business Language 0, or B-0, provided a solution, targeting the UNIVAC I, the world’s first dedicated business computer.

Grace Hopper’s experience showed that businesses much preferred plain English words over symbols and mathematical notation. Hopper’s role as technical adviser to the CODASYL committee, which developed the first COBOL standard, was a recognition of both FLOW-MATIC’s success and her expertise in this area. As she later stated in a 1980 interview, COBOL 60 was 95% FLOW-MATIC. The other 5% came from competing languages, such as IBM’s COMTRAN, which embodied the same idea but a very different implementation.

Interestingly, a feature of COBOL prior to the 2002 standard was its column-based coding style, derived from the use of 80-column punch cards. Which brings us to the many feature updates the COBOL standard has received over the decades.

Changing with the times

An IBM COBOL coding form from the 1960s.

One interesting aspect of domain-specific languages in particular is that they reflect both the state of technology and of the domain at the time. When COBOL came into use in the 1960s, programming was not done directly on the computer system, but usually via code handed to the mainframe in the form of punch cards or, if you were lucky, magnetic tape. In practice, this meant that ‘running a program’ involved handing your stack of punched cards or a special coding form to the mainframe wranglers, who would run the program for you and hand back the results.

These intermediate steps added complexity to developing new COBOL programs, and the column-based style was still the only option as of the COBOL-85 update. With the next standard update in 2002, however, many changes were made, including the end of mandatory column-based alignment and the adoption of free-form code. This update also added object-oriented programming and other features, including improvements to the previously limited string handling and more numeric data representations.

What remained unchanged was COBOL’s lack of code blocks. Instead, COBOL source is divided into four divisions:

  • Identification division
  • Environment division
  • Data division
  • Procedure division

The identification division specifies the name of, and meta information about, the program, in addition to class and interface specifications. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program’s statements. Finally, each division is subdivided into sections, each of which is made up of paragraphs.
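To make that structure concrete, here is a minimal free-form (COBOL 2002 style) sketch showing all four divisions; the program name and figures are invented for illustration, and the environment division is simply left empty since nothing system-specific is needed:

```cobol
IDENTIFICATION DIVISION.
PROGRAM-ID. PAYROLL-DEMO.

ENVIRONMENT DIVISION.

DATA DIVISION.
WORKING-STORAGE SECTION.
01 GROSS-PAY    PIC 9(5)V99 VALUE 1234.50.
01 TAX-RATE     PIC V99     VALUE 0.21.
01 NET-PAY      PIC 9(5)V99.

PROCEDURE DIVISION.
    COMPUTE NET-PAY = GROSS-PAY * (1 - TAX-RATE).
    DISPLAY "NET PAY: " NET-PAY.
    STOP RUN.
```

Note how the PIC clauses declare fixed-point decimal fields, with V marking an implied decimal point, and how the business logic reads almost like an English sentence.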

An IBM z14 mainframe from 2017, based on the IBM z/Architecture CISC ISA.

With the most recent COBOL update, in 2014, the floating-point type format was changed to IEEE 754, further enhancing its interoperability with other data formats. However, as Charles R. Martin noted in his COBOL introduction on The Overflow, a fairer comparison for COBOL would be with other domain-specific languages, such as SQL (introduced in 1974). One could add the likes of PostScript, Fortran, or Lisp to that comparison.

While it is technically possible to use SQL and PostScript for general-purpose programming, and to mimic DSL features in a generic (systems) programming language, it is neither a fast nor an efficient use of one’s time. All of which rather illustrates the raison d’être of these DSLs: programming as efficiently and directly as possible within a specific domain.

This point is rather succinctly illustrated by IBM’s Programming Language One (PL/I), introduced in 1964: a generic programming language intended to compete with everything from FORTRAN to COBOL, which in the end displaced none of them, as neither FORTRAN nor COBOL programmers could be convinced of PL/I’s qualities.

It is important to understand that you would not write an operating system or word processor in any of these DSLs. This lack of genericity both reduces their complexity, and is why we should judge them on their merits as DSLs for their intended domain.

The right tool

An interesting aspect of COBOL is that the committee that created it was not made up of computer scientists, but of people from the business community, strongly influenced by the needs of manufacturers such as IBM, RCA, Sylvania, General Electric, Philco, and National Cash Register, along with the business users and government agencies with whom they did business.

As a result, much as the need to define database queries efficiently shaped SQL, the need to streamline business transactions and administration has shaped COBOL over the decades. Even today, most of the world’s banking and stock trading runs on mainframe code written in COBOL, largely thanks to decades of refining the language to remove ambiguities and other problems that can lead to very costly bugs.
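COBOL’s fixed-point decimal fields are a big part of that refinement: binary floating point, the default numeric type in most generic languages, cannot represent most currency fractions exactly. A quick Python illustration (used here purely for comparison, with Python’s decimal module standing in for COBOL-style decimal arithmetic) of the class of bug this avoids:

```python
from decimal import Decimal

# Binary floating point: 0.10 has no exact binary representation,
# so summing ten dimes does not equal one dollar exactly.
float_total = sum([0.10] * 10)
print(float_total == 1.0)  # False on IEEE 754 doubles

# Fixed-point decimal arithmetic, the kind COBOL's numeric fields model,
# keeps cents exact.
decimal_total = sum([Decimal("0.10")] * 10)
print(decimal_total == Decimal("1.00"))  # True
```

In a financial system processing billions of transactions, rounding drift like the first case is precisely the kind of “very costly bug” the language was refined to rule out.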

Attempts to port business applications written in COBOL have shown that the problem with moving statements from a DSL to a generic language is that the latter makes none of the assumptions, protections, and feature choices for which the DSL was created in the first place. The more generic a language is, the more unintended consequences a statement can have, meaning that instead of literally porting a COBOL or FORTRAN (or SQL) statement, you need to reproduce all the checks, limitations, and safeguards of the original language along with its statements.

Ultimately, any attempt to port this type of code to a generic language will inevitably end up reimplementing the DSL in the target language, with a good chance of introducing bugs along the way for a variety of reasons. Which means that while a generic programming language can implement the same functionality as these DSLs, the real question is whether that is desirable at all, especially when the cost of downtime and errors is measured in millions of dollars per second, as in a nation’s financial system.

The attraction of a DSL here is that it avoids many potential corner cases and problems by simply not implementing the features that enable those problems.

Where GCC-COBOL fits

Despite strong demand, there is an acute shortage of COBOL developers. And although GCC-COBOL is, like GnuCOBOL, not a formally validated compiler that will be adopted anywhere near the IBM z/OS-powered mainframes of a financial institution, it plays an invaluable role in making the COBOL toolchain easily accessible. This in turn enables hobbyists and students to develop in COBOL, whether for fun or for a potential career.

A business might also use such an open-source toolchain to replace legacy Java or similar payment-processing applications with COBOL, without investing in a proprietary toolchain and its associated ecosystem. According to the developer behind GCC-COBOL in the mailing list announcement, this is a goal: to enable mainframe COBOL applications to run on Linux systems.

While financial institutions are still more likely to stick with IBM Z system mainframes (the ‘Z’ stands for ‘Zero Downtime’) and their bulletproof service agreements, it is good to see such an important DSL become more accessible to everyone, with no strings attached.
