Rules for engaging in projects to develop software with human-like cognitive abilities
Here are some rules that seem almost mandatory for engaging in software projects that could be misconstrued as AI.
Specific scope
The scope should be "general cognitive ability" and "human-like general intelligence"; hardcoding solutions to other problems is not in scope. An even more specific scope should be mandatory, especially one that includes testable definitions.
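One way to make "testable definitions" concrete: no capability enters the scope without an executable test attached, so scope disputes reduce to running code. A minimal sketch (the capability name and test here are hypothetical examples):

```python
# Sketch: a scope entry is a claimed capability plus an executable test.
# The capability and test below are hypothetical examples.
from typing import Callable

SCOPE: dict[str, Callable[[object], bool]] = {}

def in_scope(capability: str, test: Callable[[object], bool]) -> None:
    """Register a capability; there is no way to claim scope without a test."""
    SCOPE[capability] = test

def evaluate(agent: object) -> dict[str, bool]:
    """Run every scope test against a candidate implementation."""
    return {name: test(agent) for name, test in SCOPE.items()}

# Example: "transfer" is in scope only because a test defines what it means.
in_scope("transfer: reuse skill A on novel task B",
         lambda agent: getattr(agent, "solves_b_after_a", lambda: False)())
```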
Minimal "hardcoding"
Don't spend 10 years typing up knowledge graphs or whatever.
Limited number of lines of code
No more than 1 million lines of code. An excessively large line count (LOC) is often a signal of poor software quality. Additionally, the more lines of code a project contains, the more time must be spent typing them out; and if those lines are generated, the more time must be spent maintaining them or the generator that produces them.
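A budget this concrete can be enforced mechanically. A minimal sketch (the cap is the rule's own figure; the file extensions are assumptions to adjust per project):

```python
# Sketch: fail the build if the project exceeds the LOC budget.
import pathlib
import sys

LOC_BUDGET = 1_000_000
EXTENSIONS = {".py", ".c", ".h"}  # assumed; adjust to the project's languages

def count_loc(root: str = ".") -> int:
    total = 0
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in EXTENSIONS and path.is_file():
            total += sum(1 for _ in path.open(errors="ignore"))
    return total

if __name__ == "__main__":
    loc = count_loc()
    print(f"{loc} lines of code (budget: {LOC_BUDGET})")
    sys.exit(0 if loc <= LOC_BUDGET else 1)
```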
Limited numbers of programmers
Do not assume that a large software team will solve the fundamental problems of the project. Generally, prefer plans that call for only a single programmer (yourself). One possible heuristic: "if this person is added to the project, will the probability of success increase by at least 1,000x? If not, then an additional person is not going to help significantly." This holds especially if you estimate the probability of success from first principles at something like 0.00000001%.
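To make the arithmetic behind the heuristic concrete (the numbers are the ones above, not measurements):

```python
# Sketch: even a 1,000x boost barely moves a first-principles estimate this low.
p_success = 0.00000001 / 100        # "0.00000001%" as a probability: 1e-10
boost = 1_000                       # the hoped-for gain from one more person

print(f"alone:     {p_success:.1e}")          # 1.0e-10
print(f"with hire: {p_success * boost:.1e}")  # 1.0e-07, still essentially zero
```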
Implementation time constraints
Must take less time to implement than it would take to reverse engineer the human brain and build brain emulation software. Since that has never been done either, make up some estimates and then stick to them. For example, if you estimate that the research needed to emulate a functional human brain would take, say, 30 years, then 30 years is the upper limit on the implementation time of a non-emulation approach.
No hand-waving
All hand-waving should be tracked and relentlessly minimized.
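One cheap way to actually track it: tag every unjustified leap with an explicit marker, in the spirit of TODO comments, and count them. A sketch (the HANDWAVE convention is an assumption):

```python
# Sketch: count explicit hand-waving markers so the total can be driven down.
import pathlib

MARKER = "HANDWAVE"  # assumed convention: tag every unjustified leap in comments

def handwave_report(root: str = ".") -> int:
    total = 0
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.open(errors="ignore"), start=1):
            if MARKER in line:
                print(f"{path}:{lineno}: {line.strip()}")
                total += 1
    return total

if __name__ == "__main__":
    print(f"{handwave_report()} hand-waves outstanding")
```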
Must not require impossibly huge amounts of computational resources
Implementations must run on commodity computing hardware.
edit (2023): Maybe it's time to update this rule in light of the bitter lesson.
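Whether or not the cap gets relaxed, the rule is enforceable while it stands: develop only on machines inside a declared commodity envelope, so a dependence on big iron never silently creeps in. A minimal sketch (the budget figures are placeholders, and psutil is one assumed third-party way to read host memory):

```python
# Sketch: refuse to develop on hosts outside a "commodity hardware" envelope.
# Budget figures are placeholders; psutil is third-party (pip install psutil).
import os
import psutil

MAX_CORES = 16              # assumed commodity ceiling
MAX_RAM_BYTES = 64 * 2**30  # 64 GiB, also assumed

def assert_commodity() -> None:
    cores = os.cpu_count() or 1
    ram = psutil.virtual_memory().total
    if cores > MAX_CORES or ram > MAX_RAM_BYTES:
        raise RuntimeError(
            f"host ({cores} cores, {ram / 2**30:.0f} GiB) exceeds the "
            "commodity envelope; the design, not the hardware, must improve")

assert_commodity()
```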
Prefer implementations that can be tested using "universal psychometrics"
http://users.dsic.upv.es/~flip/papers/TR-upsycho2012.pdf
... or something at least as useful.
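Whatever the exact battery, the practical requirement is a uniform harness: every candidate implementation exposes the same minimal interface and gets scored without the tests knowing its internals. A sketch of such an interface (the method names and structure are hypothetical, not taken from the paper):

```python
# Sketch: a uniform agent/task interface so any implementation can be
# scored by the same battery. Names are hypothetical, not from the paper.
from typing import Protocol

class Agent(Protocol):
    def observe(self, stimulus: str) -> None: ...
    def act(self) -> str: ...

class Task(Protocol):
    def stimulus(self) -> str: ...
    def score(self, response: str) -> float: ...  # 0.0 .. 1.0

def run_battery(agent: Agent, battery: list[Task]) -> float:
    """Average score across a battery of tasks of graded difficulty."""
    total = 0.0
    for task in battery:
        agent.observe(task.stimulus())
        total += task.score(agent.act())
    return total / len(battery)
```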
No hard-coded grammar
No rules about English should be in the source code.
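Like the hand-waving rule, this one can be policed mechanically, if crudely. A sketch (the token list is an assumed heuristic and would produce false positives, including on itself):

```python
# Sketch: flag source lines that look like hardcoded English grammar rules.
# The token list is an assumed heuristic, not an exhaustive definition.
import pathlib

SUSPECT_TOKENS = {"noun", "verb", "adjective", "plural_of", "conjugate"}

def grammar_lint(root: str = ".") -> list[str]:
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.open(errors="ignore"), start=1):
            if any(tok in line.lower() for tok in SUSPECT_TOKENS):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    print("\n".join(grammar_lint()) or "no hardcoded grammar found")
```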
No "emergence"
There should be no expectation of "emergence". Desired capabilities should be designed for explicitly, not expected to appear on their own once the system gets big enough.
Must have very good reasons for using any previous approach
Since previous approaches seem to have not quite worked, every reuse of one must, at minimum, come with good reasons for why it failed before and why "this time it's different".
Comprehensive planning
There are many possible designs that may or may not work. Since any given randomly picked design is unlikely to be in the set of workable candidates, there must be good reasons for picking a particular design. Additionally, if only a single design is pursued, then there must be good reasons for not comprehensively investigating competing alternatives.
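One lightweight way to honor both requirements is a design log that records every candidate considered and the explicit reasons for selecting or rejecting it. A sketch (the fields and the example entry are assumptions):

```python
# Sketch: a design decision log so "good reasons" are recorded, not implied.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    reasons_for: list[str] = field(default_factory=list)
    reasons_against: list[str] = field(default_factory=list)
    status: str = "open"  # open | rejected | selected

log: list[Candidate] = []
log.append(Candidate("hand-built knowledge graph",
                     reasons_against=["violates the minimal-hardcoding rule"],
                     status="rejected"))

# A selection is only defensible once the alternatives were actually examined:
assert all(c.status != "open" for c in log), "uninvestigated alternatives remain"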
The goal should be somewhat limited computational cognition
There is no reason to target "really, really smart" cognition at first. Even "profoundly mentally challenged" software would be an excellent and useful result.
Pick good role models
Snails are plenty cognitive, even though they have only about 20,000 neurons. Some snails are actually rather social. Also, computational brain models have long since surpassed 20,000 neurons in size.
Another interesting role model is the (talking) parrot, especially when it is mimicking sounds from its environment. "Being talkative" is not necessarily a requirement for human-like cognitive ability, but communication is one of the few ways to peek into an operational chunk of brain matter.
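To put that neuron count in perspective: a crude spiking simulation at snail scale is trivial on commodity hardware. A sketch (the dynamics, parameters, and connectivity are made up for illustration; this is not a snail model):

```python
# Sketch: 20,000 leaky integrate-and-fire neurons with random sparse wiring.
# Parameters and connectivity are made up; the point is only the scale.
import numpy as np

N, STEPS, THRESHOLD, LEAK = 20_000, 1_000, 1.0, 0.95
rng = np.random.default_rng(0)

v = np.zeros(N)                        # membrane potentials
weights = rng.normal(0, 0.1, (N, 64))  # each neuron listens to 64 inputs
sources = rng.integers(0, N, (N, 64))  # indices of those presynaptic neurons

for _ in range(STEPS):
    spikes = v >= THRESHOLD
    v[spikes] = 0.0                                  # reset fired neurons
    drive = (weights * spikes[sources]).sum(axis=1)  # summed synaptic input
    v = LEAK * v + drive + rng.normal(0, 0.05, N)    # leak + input + noise

print(f"simulated {N} neurons for {STEPS} steps")
```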