ãã㯠JavaScript & Node.js ã®ä¿¡é Œæ§ã®ããã®A-Zãªã¬ã€ãã§ãã
æ¬ã¬ã€ãã¯ã沢山ã®çŽ æŽãããããã°èšäºãæžç±ãªã©ã®äžã«ããæ§ã
ãªããŒã«ããå
容ããã¥ã¬ãŒã·ã§ã³ããèŠçŽããŠäœãããŠããŸãã
åºç€ã¯ãã¡ããã®ããšãå€ãã®ã¢ããã³ã¹ããªãããã¯ïŒæ¬çªç°å¢ã§ã®ãã¹ãã»ãã¥ãŒããŒã·ã§ã³ãã¹ãã»property-basedãã¹ãã»æŠç¥çã§ãããã§ãã·ã§ãã«ãªããŒã«ã«ã€ããŠãªã©ïŒãŸã§åŠã¹ãæ ã«åºãŸãããïŒ ãã®ã¬ã€ããé ã ãŸã§èªã¿ããã°ãããªãã®ãã¹ãã¹ãã«ã¯äžŠã®ã¬ãã«ã倧ããåé§ããããšã§ãããã
ãŸãã¯ãã©ããªã¢ããªã±ãŒã·ã§ã³ã«ãšã£ãŠãæ ¹å¹¹ãšãªãæ®éçãªãã¹ãã®ç¿æ
£ãç解ãããšããããå§ããŸãããã
ãããŠãããã³ããšã³ã/UIãããã¯ãšã³ããCIããããã¯ãªããªããã®å
šãŠã§ããèªåã®èå³ã®ããåéãæ¢æ±ããŠãããŸãããã
- A JavaScript & Node.js consultant
- Testing Node.js & JavaScript From A To Z - my easy-to-digest online course with more than 10 hours of video, 14 types of tests and 40+ testing best practices
- Follow me on Twitter
- 🇵🇱 Polish - contributed by Michal Biesiada
- 🇪🇸 Spanish - contributed by Miguel G. Sanguino
- 🇧🇷 Portuguese - contributed by Iago Angelim Costa Cavalcante, Douglas Mariano Valero and koooge
- Want to translate it to your own language? Please open an issue 💜
A single piece of advice that inspires all the others (1 special bullet)
The foundation for building clean tests (12 bullets)
Writing backend and microservice tests effectively (8 bullets)
Writing tests for web UI, including component and E2E tests (11 bullets)
Watching the watchman: measuring test quality (4 bullets)
CI guidelines in the JS world (9 bullets)
✅ Do: Testing code is not production code - design it to be dead-simple, short, abstraction-free, flat and lean to work with. Strive for tests that anyone can look at and get the intent instantly.
Our heads are always full of the main production code, so there is no spare "brain space" for additional complexity. Should we try to squeeze yet more difficult code into our poor brains, it will slow the whole team down, which works against the very reason we test in the first place. Practically, this is why many teams abandon testing.
Think of tests as an opportunity for something else: a friendly, always-smiling assistant that is delightful to work with and delivers great value for a small investment.
Science tells us that humans have two brain systems: System 1 is used for effortless activities like driving a car on an empty road, and System 2 is used for complex and conscious operations like solving a math equation.
Design your tests for System 1. When looking at test code, it should feel as easy as editing an HTML document, never like solving 2X(17 × 24).
To achieve that, selectively cherry-pick techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, and strive to keep it nimble; sometimes it's even worth dropping a few tests and trading reliability for agility and simplicity.
Most of the advice below derives from this principle.
✅ Do: A test report should tell whether the current application revision satisfies the requirements, even for people who are not necessarily familiar with the codebase: the tester, the DevOps engineer who deploys, and the future you two years from now. This is achieved best when the tests speak at the requirements level and include 3 parts:
(1) What is being tested? For example, the ProductsService.addNewProduct method
(2) Under what circumstances and scenario? For example, no price is passed to the method
(3) What is the expected result? For example, the new product is not approved
❌ Otherwise: A deployment just failed and a test named "Add product" is failing. Does this tell you exactly what went wrong?
📗 Note: Each bullet has code examples, and sometimes also an illustration. Click to expand.
✏ Code Examples
//1. the unit under test
describe('Products Service', function() {
  describe('Add new product', function() {
    //2. the scenario and 3. the expectation
    it('When no price is specified, then the product status is pending approval', () => {
      const newProduct = new ProductService().add(...);
      expect(newProduct.status).to.equal('pendingApproval');
    });
  });
});
© Credits & read more: 1. Roy Osherove - Naming standards for unit tests
✅ Do: Structure your tests with 3 well-separated sections: Arrange, Act and Assert (AAA). Following this structure guarantees that the reader spends no brain-CPU on understanding the test plan:
The 1st A - Arrange: all the setup code that brings the system to the scenario the test aims to simulate. This might include instantiating the unit under test, adding DB records, mocking/stubbing objects, and any other preparation code.
The 2nd A - Act: execute the unit under test. Usually one line of code.
The 3rd A - Assert: ensure that the received value satisfies the expectation. Usually one line of code.
❌ Otherwise: Not only will you spend long hours understanding the main code, but what should have been the simplest task of the day - writing tests - will also strain your brain.
✏ Code Examples
describe("Customer classifier", () => {
test("ã«ã¹ã¿ããŒã500$è²»ãããæ, ãã¬ãã¢ã ãšããŠèå¥ãããããš", () => {
//Arrange(æºåãã)
const customerToClassify = { spent: 505, joined: new Date(), id: 1 };
const DBStub = sinon.stub(dataAccess, "getCustomer").reply({ id: 1, classification: "regular" });
//Act(åãã)
const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
//Assert(確èªãã)
expect(receivedClassification).toMatch("premium");
});
});
test("ãã¬ãã¢ã ãšããŠèå¥ãããããš", () => {
const customerToClassify = { spent: 505, joined: new Date(), id: 1 };
const DBStub = sinon.stub(dataAccess, "getCustomer").reply({ id: 1, classification: "regular" });
const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
expect(receivedClassification).toMatch("premium");
});
⪠ïž1.3 ãããã¯ãç±æ¥ã®èšèã§æåŸ ããæ¯ãèããèšè¿°ãã: BDDã¹ã¿ã€ã«ã®ã¢ãµãŒã·ã§ã³ã䜿ã
â
ããããŸããã: 宣èšçãªã¹ã¿ã€ã«ã§ãã¹ããæžãããšã¯ãèªè
ã«å°ããè³å
CPUã䜿ãããã«æŠèŠ³ãæŽãŸããå©ããšãªããŸãã沢山ã®æ¡ä»¶ããžãã¯ãå«ããããªåœä»€çãªã³ãŒããæžããšãèªè
ã¯è³å
CPUã沢山䜿ãããšã匷å¶ãããŠããŸããŸãã
ãªã®ã§ã宣èšçãªBDDã¹ã¿ã€ã«ã§ã expect
ã should
ãªã©ãçšãããæ補ãé¿ãã€ã€ã人éçãªèšèã§æåŸ
ããçµæãæžããŸãããã
ããããChaiãJestã欲ããã¢ãµãŒã·ã§ã³ã¡ãœããããã£ãŠãããããããŠãã®æ¬²ããã¢ãµãŒã·ã§ã³ã¡ãœãããäœåºŠã䜿ããããã®ã§ããã°ãJestã®ãããã£ãŒãæ¡åŒµããããšãã«ã¹ã¿ã Chaiãã©ã°ã€ã³ãæžãããšãæ€èšããŠã¿ãŠãã ããã
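For illustration, a minimal sketch of a custom Jest matcher; the matcher name, the product shape and the addNewProduct call are invented for this example:
expect.extend({
  toBeApproved(receivedProduct) {
    //a tiny domain-specific matcher so tests read like requirements
    const pass = receivedProduct.status === "approved";
    return {
      pass,
      message: () => `expected product status to be "approved" but got "${receivedProduct.status}"`
    };
  }
});

test("When adding a valid product, then it is approved", () => {
  const newProduct = addNewProduct(1, "iPhone"); //hypothetical service call
  expect(newProduct).toBeApproved(); //declarative, human-readable assertion
});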
❌ Otherwise: The team will write fewer tests and decorate the annoying ones with .skip()
✏ Code Examples
👎 Anti-Pattern Example: The reader must skim through imperative code just to get the test plan
test("ã¢ããã³ãååŸããæã ã¢ããã³ã®ã¿ãååŸçµæã«å«ãŸããããš", () => {
//"admin1"ã "admin2" ãšããã¢ããã³ãšã"user1"ãšãããŠãŒã¶ãŒãè¿œå ããŠãããšä»®å®ãã
const allAdmins = getUsers({ adminOnly: true });
let admin1Found,
adming2Found = false;
allAdmins.forEach(aSingleUser => {
if (aSingleUser === "user1") {
assert.notEqual(aSingleUser, "user1", "A user was found and not admin");
}
if (aSingleUser === "admin1") {
admin1Found = true;
}
if (aSingleUser === "admin2") {
admin2Found = true;
}
});
if (!admin1Found || !admin2Found) {
throw new Error("Not all admins were returned");
}
});
it("ã¢ããã³ãååŸããæã ã¢ããã³ã®ã¿ãååŸçµæã«å«ãŸããããš", () => {
// 2人ã¢ããã³ãè¿œå ããŠãããšä»®å®ãã
const allAdmins = getUsers({ adminOnly: true });
expect(allAdmins)
.to.include.ordered.members(["admin1", "admin2"])
.but.not.include.ordered.members(["user1"]);
});
⚪️ 1.4 Stick to black-box testing: test only public methods
✅ Do: Testing the internals brings huge overhead for almost nothing. If your code/API returns the right results, should you really spend three hours testing HOW it achieved that internally, and then maintain those fragile tests? Whenever a public behavior is tested, the private implementation is implicitly tested as well, and those tests will break only when there is a specific problem (e.g. wrong output). This approach is also called behavioral testing.
On the other hand, when testing the internals (the white-box approach), your focus shifts from the component's output to its core details. A small refactoring may break the tests even when the output is fine - and this dramatically increases the maintenance cost.
❌ Otherwise: Your tests behave like the boy who cried wolf: they shout false alarms (e.g. a test fails because a private variable name was changed). Unsurprisingly, the developers will soon start ignoring the CI notifications, until one day a real bug gets ignored.
✏ Code Examples
👎 Anti-Pattern Example: A test case testing the internals for no good reason
class ProductService {
  //this method is only used internally
  //changing this method's name will break the test
  calculateVATAdd(priceWithoutVAT) {
    return { finalPrice: priceWithoutVAT * 1.2 };
    //changing the shape or key names of the returned object will break the test
  }
  //public method
  getPrice(productId) {
    const desiredProduct = DB.getProduct(productId);
    const finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
    return finalPrice;
  }
}

it("White-box test: When the internal method gets 0 as VAT, it returns 0", async () => {
  //There's no requirement to let users calculate the VAT, only to show the final price.
  //Nevertheless we falsely insist here on testing the class internals
  expect(new ProductService().calculateVATAdd(0).finalPrice).to.equal(0);
});
⚪️ 1.5 Choose the right test doubles: avoid mocks in favor of stubs and spies
✅ Do: Test doubles are a necessary evil because they are coupled to the application internals, yet sometimes they bring great value (forgot what test doubles are? Read this: mocks vs stubs vs spies).
Before using a test double, ask a simple question: am I testing functionality that appears, or could appear, in the requirements document? If not, it's likely a smell of white-box testing.
For example, if you want to test how your application behaves when the payment service is down, you might stub the payment service to return 'No Response' and assert that the unit under test returns the right value. This checks the application's behavior/response/result under a specific scenario. You might also use a spy to assert that an email was sent when that service is down - this, again, is a behavioral check that is likely to appear in the requirements doc ("Send an email if the payment fails").
On the flip side, if you mock the payment service and assert that it was called with the right JavaScript types, your test is focused on internal things that have nothing to do with the application functionality, and you'll have to update it frequently.
❌ Otherwise: Any refactoring mandates hunting down and updating every mock in the code. Tests become a burden rather than a reliable friend.
✏ Code Examples
it("æå¹ãªãããã¯ããåé€ãããæ, ããŒã¿ã¢ã¯ã»ã¹çšã®DALãæ£ãããããã¯ããšæ£ããã³ã³ãã£ã°ã§1床ã ãåŒã°ããããš", async () => {
//æ¢ã«ãããã¯ããè¿œå ããŠãããšãã
const dataAccessMock = sinon.mock(DAL);
//ãããããããªãã§ãã: å
éšå®è£
ããã¹ãããããšããŽãŒã«ã«ãªã£ãŠããŸã£ãŠããŠããã ã®å¯äœçšã§ã¯ãªããªã£ãŠããŸã£ãŠããŸãã
dataAccessMock
.expects("deleteProduct")
.once()
.withArgs(DBConfig, theProductWeJustAdded, true, false);
new ProductService().deletePrice(theProductWeJustAdded);
dataAccessMock.verify();
});
👏 Doing It Right Example: Spies are focused on testing the requirements, and touching the internals happens only as an incidental side effect
it("æå¹ãªãããã¯ããåé€ãããæ, ã¡ãŒã«ãéä¿¡ãããããš", async () => {
//æ¢ã«ãããã¯ããè¿œå ããŠãããšãã
const spy = sinon.spy(Emailer.prototype, "sendEmail");
new ProductService().deletePrice(theProductWeJustAdded);
//ããããOK: ãããå
éšå®è£
ãããªãã®ãã£ãŠ? ããã§ãããã§ãã¡ãŒã«ãéä¿¡ãããšããèŠä»¶ããã¹ãããäžã§ã®å¯äœçšãšããŠã§ã
expect(spy.calledOnce).to.be.true;
});
Check out my online course: Testing Node.js & JavaScript From A To Z
✅ Do: Often production bugs manifest under very specific and surprising input - the more realistic the test input, the greater the chance of catching bugs early. Use dedicated libraries like Faker to generate pseudo-realistic data that resembles the variety and shape of production data. For example, such libraries can generate realistic phone numbers, usernames, credit cards, company names, and even 'lorem ipsum' text. You can also create tests (on top of, not instead of, the usual ones) that randomize the faker data to stretch your unit under test, or even import real data from your production environment. Want to take it to the next level? See the next bullet (property-based testing).
❌ Otherwise: All your development testing will falsely look green when you use synthetic inputs like "Foo", but in production it may turn red when a hacker passes a nasty string like "@3e2ddsf . ##' 1 fdsfds . fds432 AAAA"
✏ Code Examples
👎 Anti-Pattern Example: A test suite that passes due to unrealistic data
const addProduct = (name, price) => {
  const productNameRegexNoSpace = /^\S*$/; //no white-space allowed
  if (!productNameRegexNoSpace.test(name)) return false; //this path is never reached due to the simplistic input
  //some logic here
  return true;
};
test("Wrong: When adding a new product with valid properties, get successful confirmation", async () => {
  //the string "Foo", which is used in all tests, never triggers a false result
  const addProductResult = addProduct("Foo", 5);
  expect(addProductResult).toBe(true);
  //false positive: the test passed because we never tried a long
  //string that includes spaces
});
it("è¯ãäŸ: æå¹ãªproductãè¿œå ããæã確èªã«æåãã", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
//çæãããã©ã³ãã ãªå
¥åå€: {'Sleek Cotton Computer', 85481}
expect(addProductResult).to.be.true;
//ã©ã³ãã ãªå
¥åå€ã®ãããã§ãäºæããŠããªãã£ãã³ãŒããã¹ã«å°éããŠãã¹ãã倱æããã
//ãã°ãæ©æçºèŠã§ããïŒ
});
✅ Do: Typically developers test with a handful of input patterns. Even when the input format resembles real-world data (see the bullet "Don't foo"), they cover only a few input combinations like method('', true, 1) and method('string', false, 0).
In production, however, an API that is called with 5 parameters can be invoked with thousands of different permutations, and one of them might bring the process down (see Fuzz Testing).
What if you could write a single test that automatically sends 1,000 permutations of input and reports which inputs made the code fail to return the right response?
Property-based testing is a technique that does exactly that: by feeding all (well, many) possible input permutations to your unit under test, it increases the chance of finding a bug early.
For example, given a method addNewProduct(id, name, isDiscount), the supporting libraries will call it with many (number, string, boolean) combinations like (1, "iPhone", false) and (2, "Galaxy", true).
You can run property-based testing with your favorite test runner (Mocha, Jest, etc.) using libraries like js-verify or testcheck (the latter has much better documentation).
Update: Nicolas Dubien suggested in a comment below to check out fast-check, which seems to offer even more features and is actively maintained.
❌ Otherwise: You unconsciously choose test inputs that cover only the code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle for exposing bugs.
✏ Code Examples
import fc from "fast-check";
describe("Product service", () => {
describe("Adding new", () => {
//this will run 100 times with different random properties
it("Add new product with random yet valid properties, always successful", () =>
fc.assert(
fc.property(fc.integer(), fc.string(), (id, name) => {
expect(addNewProduct(id, name).status).toEqual("approved");
})
));
});
});
✅ Do: When there is a need for snapshot testing, use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test (Inline Snapshot) and not within external files. Keeping this guideline will ensure your tests remain self-explanatory and less fragile.
On the other hand, "classic snapshots" tutorials and tools encourage storing big files (e.g. component rendering markup, API JSON results) on some external medium and comparing the received result with the saved version on each test run. This, for example, can implicitly couple our test to 1000 lines with 3000 data values that the test writer never read and reasoned about. Why is this wrong? By doing so, there are 1000 reasons for your test to fail - it's enough for a single line to change for the snapshot to become invalid, and this is likely to happen a lot. How frequently? For every space, comment or minor CSS/HTML change. Not only that, the test name wouldn't give a clue about the failure since it just checks that 1000 lines didn't change, and it also encourages the test writer to accept as the desired truth a long document he couldn't inspect and verify. All of these are symptoms of an obscure and eager test that is not focused and aims to achieve too much
It's worth noting that there are a few cases where long & external snapshots are acceptable - when asserting on schema and not data (extracting out values and focusing on fields), or when the received document rarely changes
❌ Otherwise: A UI test fails. The code seems right, the screen renders perfect pixels; what happened? Your snapshot testing just found a difference between the original document and the currently received one - a single space character was added to the markdown...
✏ Code Examples
it("TestJavaScript.com is renderd correctly", () => {
//Arrange
//Act
const receivedPage = renderer
.create(<DisplayPage page="http://www.testjavascript.com"> Test JavaScript </DisplayPage>)
.toJSON();
//Assert
expect(receivedPage).toMatchSnapshot();
//We now implicitly maintain a 2000 lines long document
//every additional line break or comment - will break this test
});
it("When visiting TestJavaScript.com home page, a menu is displayed", () => {
//Arrange
//Act
const receivedPage = renderer
.create(<DisplayPage page="http://www.testjavascript.com"> Test JavaScript </DisplayPage>)
.toJSON();
//Assert
const menu = receivedPage.content.menu;
expect(menu).toMatchInlineSnapshot(`
<ul>
<li>Home</li>
<li> About </li>
<li> Contact </li>
</ul>
`);
});
✅ Do: Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests (also known as a "test fixture") for the sake of performance improvement. While performance is indeed a valid concern - it can be mitigated (see the "Component testing" bullet) - test complexity is a much more painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern - a balanced compromise might come in the form of seeding only the suite of tests that are not mutating data (e.g. queries)
❌ Otherwise: A few tests fail, a deployment is aborted, our team is going to spend precious time now; do we have a bug? Let's investigate - oh no, it seems that two tests were mutating the same seed data
✏ Code Examples
👎 Anti-Pattern Example: tests are not independent and rely on some global hook to feed global DB data
before(async () => {
//adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
await DB.AddSeedDataFromJson('seed.json');
});
it("When updating site name, get successful confirmation", async () => {
//I know that site name "portal" exists - I saw it in the seed files
const siteToUpdate = await SiteService.getSiteByName("Portal");
const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
expect(updateNameResult).to.be(true);
});
it("When querying by site name, get the right site", async () => {
//I know that site name "portal" exists - I saw it in the seed files
const siteToCheck = await SiteService.getSiteByName("Portal");
expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
});
it("When updating site name, get successful confirmation", async () => {
//test is adding a fresh new records and acting on the records only
const siteUnderTest = await SiteService.addSite({
name: "siteForUpdateTest"
});
const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
expect(updateNameResult).to.be(true);
});
✅ Do: When trying to assert that some input triggers an error, it might look right to use try-catch-finally and assert that the catch clause was entered. The result is an awkward and verbose test case (see the example below) that hides the simple test intent and the result expectation
A more elegant alternative is using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It's absolutely mandatory to also ensure the exception contains a property that tells the error type; otherwise, given just a generic error, the application won't be able to do much more than show a disappointing message to the user
❌ Otherwise: It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
✏ Code Examples
👎 Anti-pattern Example: A long test case that tries to assert the existence of an error with try-catch
it("When no product name, it throws error 400", async () => {
let errorWeExceptFor = null;
try {
const result = await addNewProduct({});
} catch (error) {
expect(error.code).to.equal("InvalidInput");
errorWeExceptFor = error;
}
expect(errorWeExceptFor).not.to.be.null;
//if this assertion fails, the tests results/reports will only show
//that some value is null, there won't be a word about a missing Exception
});
👏 Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or a technical PM
it("When no product name, it throws error 400", async () => {
await expect(addNewProduct({}))
.to.eventually.throw(AppError)
.with.property("code", "InvalidInput");
});
✅ Do: Different tests must run on different scenarios: quick smoke, IO-less tests should run when a developer saves or commits a file, full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep "sanity"
❌ Otherwise: Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests
✏ Code Examples
👏 Doing It Right Example: Tagging tests as '#cold-test' allows the test runner to execute only fast tests (Cold===quick tests that are doing no IO and can be executed frequently even as the developer is typing)
//this test is fast (no DB) and we're tagging it correspondingly
//now the user/CI can run it frequently
describe("Order service", function() {
describe("Add new order #cold-test #sanity", function() {
test("Scenario - no currency was supplied. Expectation - Use the default currency #sanity", function() {
//code logic here
});
});
});
✅ Do: Apply some structure to your test suite so an occasional visitor can easily understand the requirements (tests are the best documentation) and the various scenarios that are being tested. A common method for this is placing at least 2 'describe' blocks above your tests: the 1st for the name of the unit under test and the 2nd for an additional level of categorization like the scenario or custom categories (see code examples and print screen below). Doing so will also greatly improve the test reports: the reader will easily infer the test categories, delve into the desired section and correlate failing tests. In addition, it will get much easier for a developer to navigate through the code of a suite with many tests. There are multiple alternative structures for a test suite that you may consider, like given-when-then and RITE
❌ Otherwise: When looking at a report with a flat and long list of tests, the reader has to skim-read through long texts to figure out the major scenarios and correlate the commonality of failing tests. Consider the following case: when 7/100 tests fail, looking at a flat list demands reading the failing tests' text to see how they relate to each other. However, in a hierarchical report, all of them could be under the same flow or category, and the reader will quickly infer what, or at least where, the root failure cause is
✏ Code Examples
👏 Doing It Right Example: Structuring the suite with the name of the unit under test and the scenarios will lead to the convenient report that is shown below
// Unit under test
describe("Transfer service", () => {
//Scenario
describe("When no credit", () => {
//Expectation
test("Then the response status should decline", () => {});
//Expectation
test("Then it should send email to admin", () => {});
});
});
👎 Anti-pattern Example: A flat list of tests will make it harder for the reader to identify the user stories and correlate failing tests
test("Then the response status should decline", () => {});
test("Then it should send email", () => {});
test("Then there should not be a new transfer record", () => {});
✅ Do: This post is focused on testing advice that is related to, or at least can be exemplified with, Node.js. This bullet, however, groups a few non-Node-related tips that are well-known
Learn and practice TDD principles - they are extremely valuable for many, but don't get intimidated if they don't fit your style; you're not the only one. Consider writing the tests before the code in a red-green-refactor style (see the sketch below), ensure each test checks exactly one thing, when you find a bug - before fixing it, write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing quick and simplistic code that satisfies the test - then refactor gradually and take it to a production-grade level, and avoid any dependency on the environment (paths, OS, etc.)
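As a tiny illustration of the red-green-refactor flow mentioned above (the pricing function and the bug are invented for this sketch):
//Step 1 - red: reproduce a reported bug with a failing test before touching the code
test("When the discount is 100%, then the final price is 0 and never negative", () => {
  expect(calculatePrice({ basePrice: 50, discountPercent: 100 })).toBe(0);
});

//Step 2 - green: the simplest code that satisfies the test
function calculatePrice({ basePrice, discountPercent }) {
  const discounted = basePrice - (basePrice * discountPercent) / 100;
  return Math.max(discounted, 0); //the fix: clamp so the price is never negative
}
//Step 3 - refactor gradually; the test now guards against regression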
❌ Otherwise: You'll miss pearls of wisdom that were collected for decades
✅ Do: The testing pyramid, though 10+ years old, is a great and relevant model that suggests three testing types and influences most developers' testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we've seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit all types of applications? Shouldn't the testing world consider welcoming new testing techniques?
Don't get me wrong: in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. Only, like any other model, despite its usefulness, it must be wrong sometimes. For example, consider an IoT application that ingests many events into a message-bus like Kafka/RabbitMQ, which then flow into some data-warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increases (bots, crypto, Alexa skills), greater are the chances of finding scenarios where the testing pyramid is not the best match.
It's time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest a few ideas), mind models like the testing pyramid but also match testing types to the real-world problems that you're facing ("Hey, our API is broken, let's write consumer-driven contract testing!"), diversify your tests like an investor that builds a portfolio based on risk analysis - assess where problems might arise and match some prevention measures to mitigate those potential risks
A word of caution: the TDD argument in the software world takes a typical false-dichotomy face; some preach to use it everywhere, others think it's the devil. Everyone who speaks in absolutes is wrong :]
❌ Otherwise: You're going to miss some tools with amazing ROI; some, like Fuzz, lint, and mutation, can provide value in 10 minutes
✏ Code Examples
👏 Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post "Testing Microservices - the sane way"
✅ Do: Each unit test covers a tiny portion of the application and it's expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower. Why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end testing? Component testing is the unsung song of the testing world - it provides the best of both worlds: reasonable performance and the possibility to apply TDD patterns, plus realistic and great coverage.
Component tests focus on the Microservice "unit": they work against the API and don't mock anything which belongs to the Microservice itself (e.g. a real DB, or at least an in-memory version of that DB), but stub anything that is external, like calls to other Microservices. By doing so, we test what we deploy, approach the app from outwards to inwards, and gain great confidence in a reasonable amount of time.
❌ Otherwise: You may spend long days on writing unit tests only to find out that you got only 20% system coverage
✏ Code Examples
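A minimal hedged sketch of such a component test: it assumes an Express app factory called initializeWebServer, supertest for in-process API calls and nock for stubbing the external users service - all of these names and routes are illustrative:
const request = require("supertest");
const nock = require("nock");
const { initializeWebServer } = require("../app"); //hypothetical app factory

let expressApp;

beforeAll(async () => {
  //start the real API of the Microservice under test (with an in-memory DB behind it)
  expressApp = await initializeWebServer();
});

test("When adding a valid order, then it is retrievable via the API", async () => {
  //stub only what is EXTERNAL to the Microservice - e.g. the users service
  nock("http://users-service").get("/user/1").reply(200, { id: 1, isPremium: true });

  const addResponse = await request(expressApp).post("/order").send({ userId: 1, productId: 2 });
  const getResponse = await request(expressApp).get(`/order/${addResponse.body.id}`);

  expect(getResponse.status).toBe(200);
  expect(getResponse.body).toMatchObject({ userId: 1, productId: 2 });
});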
✅ Do: So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and "boom!", some important client who relies on this field is angry. This is the Catch-22 of the integration world: it's very challenging for the server side to consider all the multiple client expectations - on the other hand, the clients can't perform any testing because the server controls the release dates. Consumer-driven contracts and the framework PACT were born to formalize this process with a very disruptive approach - it's not the server that defines its own test plan, rather the client defines the tests of the... server! PACT can record the client expectations and put them in a shared location, a "broker", so the server can pull the expectations and run them on every build using the PACT library to detect broken contracts - a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration
❌ Otherwise: The alternatives are exhausting manual testing or deployment fear
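A minimal consumer-side sketch, assuming the @pact-foundation/pact package (pact-js); the service names, port and endpoint are invented for illustration:
const path = require("path");
const axios = require("axios");
const { Pact } = require("@pact-foundation/pact");

//the consumer test records its expectations into a pact file that the provider build later verifies
const provider = new Pact({
  consumer: "OrdersUI",
  provider: "OrdersService",
  port: 8989,
  dir: path.resolve(process.cwd(), "pacts")
});

describe("Orders API contract", () => {
  beforeAll(() => provider.setup());
  afterAll(() => provider.finalize());

  test("When asking for an existing order, then the agreed schema is returned", async () => {
    await provider.addInteraction({
      state: "an order with id 1 exists",
      uponReceiving: "a request for order 1",
      withRequest: { method: "GET", path: "/orders/1" },
      willRespondWith: { status: 200, body: { id: 1, status: "approved" } }
    });

    //the real client code would issue this call against the mock provider
    const response = await axios.get("http://localhost:8989/orders/1");
    expect(response.data.id).toBe(1);
    await provider.verify(); //fails if the expectation above was not met
  });
});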
✅ Do: Many avoid Middleware testing because it represents a small portion of the system and requires a live Express server. Both reasons are wrong - Middlewares are small but affect all or most of the requests, and they can be tested easily as pure functions that get {req,res} JS objects. To test a middleware function, one should just invoke it and spy (using Sinon, for example) on the interaction with the {req,res} objects to ensure the function performed the right action. The library node-mocks-http takes it even further and factors the {req,res} objects along with spying on their behavior. For example, it can assert whether the http status that was set on the res object matches the expectation (see the example below)
❌ Otherwise: A bug in an Express middleware === a bug in all or most requests
✏ Code Examples
👏 Doing It Right Example: Testing middleware in isolation without issuing network calls and waking up the entire Express machine
//the middleware we want to test
const unitUnderTest = require("./middleware");
const httpMocks = require("node-mocks-http");
//Jest syntax, equivalent to describe() & it() in Mocha
test("A request without authentication header, should return http status 403", () => {
const request = httpMocks.createRequest({
method: "GET",
url: "/user/42",
headers: {
authentication: ""
}
});
const response = httpMocks.createResponse();
unitUnderTest(request, response);
expect(response.statusCode).toBe(403);
});
✅ Do: Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build to abort when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are SonarQube (4,900+ stars) and Code Climate (2,000+ stars)
Credit: Keith Holliday
❌ Otherwise: With poor code quality, bugs and performance will always be an issue that no shiny new library or state-of-the-art features can fix
✏ Code Examples
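As a hedged sketch, the sonarqube-scanner npm package can trigger an analysis from a script or CI step; the server URL and project keys below are placeholders:
//analysis.js - run with `node analysis.js` as a CI step
const scanner = require("sonarqube-scanner");

scanner(
  {
    serverUrl: "http://localhost:9000", //placeholder SonarQube server
    options: {
      "sonar.projectKey": "my-app",
      "sonar.sources": "src",
      "sonar.tests": "test"
    }
  },
  () => process.exit() //the CI can then fail the build based on the quality gate
);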
✅ Do: Weirdly, most software testing is about logic & data only, but some of the worst things that happen (and are really hard to mitigate) are infrastructural issues. For example, did you ever test what happens when your process memory is overloaded, or when the server/process dies, or does your monitoring system realize when the API becomes 50% slower? To test and mitigate these types of bad things, Chaos engineering was born at Netflix. It aims to provide awareness, frameworks and tools for testing our app's resiliency to chaotic issues. For example, one of its famous tools, the chaos monkey, randomly kills servers to ensure that our service can still serve users without relying on a single server (there is also a Kubernetes version, kube-monkey, that kills pods). All these tools work on the hosting/platform level, but what if you wish to test and generate pure Node chaos, like checking how your Node process copes with uncaught errors, unhandled promise rejections, v8 memory overloaded beyond the maximum allowed 1.7GB, or whether your UX remains satisfactory when the event loop gets blocked often? To address this I've written node-chaos (alpha), which provides all sorts of Node-related chaotic acts
❌ Otherwise: No escape here, Murphy's law will hit your production without mercy
✏ Code Examples
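As a small hedged illustration of "pure Node chaos" (a simplistic sketch, not the node-chaos API): the test below injects an unhandled promise rejection and verifies that a process-level handler observes it:
it("When an unhandled rejection is injected, then the process-level handler catches it", done => {
  process.once("unhandledRejection", reason => {
    //a real handler would log the reason and decide whether to crash gracefully
    expect(reason.message).to.equal("Chaos: injected failure");
    done();
  });
  //the chaotic act: reject a promise with nobody around to catch it
  Promise.reject(new Error("Chaos: injected failure"));
});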
✅ Do: When focusing on testing component logic, UI details become noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details) and disable animations that slow things down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store), but this will result in fictional tests that don't resemble reality and won't reveal cases where the right data doesn't even arrive at the UI
❌ Otherwise: The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation
✏ Code Examples
test("When users-list is flagged to show only VIP, should display only VIP members", () => {
// Arrange
const allUsers = [{ id: 1, name: "Yoni Goldberg", vip: false }, { id: 2, name: "John Doe", vip: true }];
// Act
const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVIP={true} />);
// Assert - Extract the data from the UI first
const allRenderedUsers = getAllByTestId("user").map(uiElement => uiElement.textContent);
const allRealVIPUsers = allUsers.filter(user => user.vip).map(user => user.name);
expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
});
test("When flagging to show only VIP, should display only VIP members", () => {
// Arrange
const allUsers = [{ id: 1, name: "Yoni Goldberg", vip: false }, { id: 2, name: "John Doe", vip: true }];
// Act
const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVIP={true} />);
// Assert - Mix UI & data in assertion
expect(getAllByTestId("user")).toEqual('[<li data-test-id="user">John Doe</li>]');
});
✅ Do: Query HTML elements based on attributes that are likely to survive graphic changes - unlike CSS selectors, and like form labels. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes, but it also becomes clear to the entire team that this element and attribute are utilized by tests and shouldn't get removed
❌ Otherwise: You want to test the login functionality that spans many components, logic and services; everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'
✏ Code Examples
// the markup code (part of React component)
<h3>
<Badge pill className="fixed_badge" variant="dark">
<span data-test-id="errorsLabel">{value}</span>
<!-- note the attribute data-test-id -->
</Badge>
</h3>
// this example is using react-testing-library
test("Whenever no data is passed to metric, show 0 as default", () => {
// Arrange
const metricValue = undefined;
// Act
const { getByTestId } = render(<dashboardMetric value={undefined} />);
expect(getByTestId("errorsLabel").text()).toBe("0");
});
<!-- the markup code (part of React component) -->
<span id="metric" className="d-flex-column">{value}</span>
<!-- what if the designer changes the classs? -->
// this exammple is using enzyme
test("Whenever no data is passed, error metric shows zero", () => {
// ...
expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
});
✅ Do: Whenever reasonably sized, test your component from the outside like your users do: fully render the UI, act on it and assert that the rendered UI behaves as expected. Avoid all sorts of mocking, partial and shallow rendering - this approach might result in untrapped bugs due to lack of detail and will harden the maintenance as the tests mess with the internals (see the bullet 'Favour blackbox testing'). If one of the child components is significantly slowing things down (e.g. animation) or complicating the setup - consider explicitly replacing it with a fake
With all that said, a word of caution is in order: this technique works for small/medium components that pack a reasonable number of child components. Fully rendering a component with too many children will make it hard to reason about test failures (root cause analysis) and might get too slow. In such cases, write only a few tests against that fat parent component and more tests against its children
❌ Otherwise: When poking into a component's internals by invoking its private methods and checking the inner state - you would have to refactor all tests when refactoring the component's implementation. Do you really have capacity for this level of maintenance?
✏ Code Examples
class Calendar extends React.Component {
static defaultProps = { showFilters: false };
render() {
return (
<div>
A filters panel with a button to hide/show filters
<FiltersPanel showFilter={showFilters} title="Choose Filters" />
</div>
);
}
}
//Examples use React & Enzyme
test("Realistic approach: When clicked to show filters, filters are displayed", () => {
// Arrange
const wrapper = mount(<Calendar showFilters={false} />);
// Act
wrapper.find("button").simulate("click");
// Assert
expect(wrapper.text().includes("Choose Filter")).toBe(true);
// This is how the user will approach this element: by text
});
test("Shallow/mocked approach: When clicked to show filters, filters are displayed", () => {
// Arrange
const wrapper = shallow(<Calendar showFilters={false} title="Choose Filter" />);
// Act
wrapper
.find("filtersPanel")
.instance()
.showFilters();
// Tap into the internals, bypass the UI and invoke a method. White-box approach
// Assert
expect(wrapper.find("Filter").props()).toEqual({ title: "Choose Filter" });
// what if we change the prop name or don't pass anything relevant?
});
⚪️ 3.4 Don't sleep, use frameworks' built-in support for async events. Also try to speed things up
✅ Do: In many cases, the unit under test's completion time is just unknown (e.g. an animation suspends element appearance) - in that case, avoid sleeping (e.g. setTimeout) and prefer more deterministic methods that most platforms provide. Some libraries allow awaiting on operations (e.g. Cypress cy.request('url')), others provide an API for waiting, like @testing-library/dom's method wait(expect(element)). Sometimes a more elegant way is to stub the slow resource, like an API for example; then, once the response moment becomes deterministic, the component can be explicitly re-rendered. When depending upon some external component that sleeps, it might turn useful to hurry up the clock. Sleeping is a pattern to avoid because it forces your test to be slow or risky (when waiting for a too-short period). Whenever sleeping and polling are inevitable and there's no support from the testing framework, some npm libraries like wait-for-expect can help with a semi-deterministic solution
❌ Otherwise: When sleeping for a long time, tests will be an order of magnitude slower. When trying to sleep for small numbers, the test will fail when the unit under test doesn't respond in a timely fashion. So it boils down to a trade-off between flakiness and bad performance
✏ Code Examples
// using Cypress
cy.get("#show-products").click(); // navigate
cy.wait("@products"); // wait for route to appear
// this line will get executed only when the route is ready
// @testing-library/dom
test("movie title appears", async () => {
// element is initially not present...
// wait for appearance
await wait(() => {
expect(getByText("the lion king")).toBeInTheDocument();
});
// wait for appearance and return the element
const movie = await waitForElement(() => getByText("the lion king"));
});
test("movie title appears", async () => {
// element is initially not present...
// custom wait logic (caution: simplistic, no timeout)
const interval = setInterval(() => {
const found = getByText("the lion king");
if (found) {
clearInterval(interval);
expect(getByText("the lion king")).toBeInTheDocument();
}
}, 100);
// wait for appearance and return the element
const movie = await waitForElement(() => getByText("the lion king"));
});
✅ Do: Apply some active monitoring that ensures the page load under a real network is optimized - this includes any UX concern like slow page loads or an un-minified bundle. The inspection tools market is not short on options: basic tools like pingdom, AWS CloudWatch and gcp StackDriver can be easily configured to watch whether the server is alive and responding under a reasonable SLA. This only scratches the surface of what might go wrong, hence it's preferable to opt for tools that specialize in frontend (e.g. lighthouse, pagespeed) and perform richer analysis. The focus should be on symptoms, metrics that directly affect the UX, like page load time, meaningful paint and the time until the page gets interactive (TTI). On top of that, one may also watch for technical causes like ensuring the content is compressed, time to first byte, optimizing images, ensuring a reasonable DOM size, SSL and many others. It's advisable to have these rich monitors both during development, as part of the CI, and most importantly - 24x7 over the production servers/CDN
❌ Otherwise: It must be disappointing to realize that after such great care for crafting a UI, with 100% of functional tests passing and sophisticated bundling - the UX is horrible and slow due to CDN misconfiguration
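For example, a hedged sketch of running Lighthouse programmatically in CI and failing the build on a slow time-to-interactive; it assumes the lighthouse and chrome-launcher packages, and the 5000ms budget is arbitrary:
const lighthouse = require("lighthouse");
const chromeLauncher = require("chrome-launcher");

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  const { lhr } = await lighthouse("https://www.mysite.com", { port: chrome.port, onlyCategories: ["performance"] });
  await chrome.kill();

  const tti = lhr.audits["interactive"].numericValue; //time-to-interactive in milliseconds
  if (tti > 5000) {
    //arbitrary budget: fail the CI step when the page is too slow
    throw new Error(`Page got interactive after ${tti}ms, the budget is 5000ms`);
  }
})();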
✅ Do: When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control, like a backend API, and use stubs instead (i.e. test doubles). Practically, instead of real network calls to APIs, use some test double library (like Sinon, testdouble, etc.) for stubbing the API response. The main benefit is preventing flakiness - testing or staging APIs by definition are not highly stable and from time to time will fail your tests although YOUR component behaves just fine (a production env was not meant for testing and it usually throttles requests). Doing this will allow simulating various API behaviors that should drive your component's behavior, like when no data is found or when the API throws an error. Last but not least, network calls will greatly slow down the tests
❌ Otherwise: The average test runs for no longer than a few ms, while a typical API call lasts 100ms+, which makes each test ~20x slower
✏ Code Examples
// unit under test
export default function ProductsList() {
const [products, setProducts] = useState(false);
const fetchProducts = async () => {
const products = await axios.get("api/products");
setProducts(products);
};
useEffect(() => {
fetchProducts();
}, []);
return products ? <div>{products}</div> : <div data-test-id="no-products-message">No products</div>;
}
// test
test("When no products exist, show the appropriate message", () => {
// Arrange
nock("api")
.get(`/products`)
.reply(404);
// Act
const { getByTestId } = render(<ProductsList />);
// Assert
expect(getByTestId("no-products-message")).toBeTruthy();
});
✅ Do: Although E2E (end-to-end) usually means UI-only testing with a real browser (see bullet 3.6), for others it means tests that stretch the entire system, including the real backend. The latter type of test is highly valuable as it covers integration bugs between frontend and backend that might happen due to a wrong understanding of the exchange schema. It is also an efficient method to discover backend-to-backend integration issues (e.g. Microservice A sends the wrong message to Microservice B) and even to detect deployment failures - there are no backend frameworks for E2E testing that are as friendly and mature as UI frameworks like Cypress and Puppeteer. The downside of such tests is the high cost of configuring an environment with so many components, and mostly their brittleness - given 50 microservices, even if one fails then the entire E2E just failed. For that reason, we should use this technique sparingly and probably have 1-10 of those and no more. That said, even a small number of E2E tests are likely to catch the type of issues they are targeted for - deployment & integration faults. It's advisable to run those over a production-like staging environment
❌ Otherwise: The UI team might invest much in testing its functionality only to realize very late that the backend's returned payload (the data schema the UI has to work with) is very different than expected
✅ Do: In E2E tests that involve a real backend and rely on a valid user token for API calls, it doesn't pay off to isolate the tests to a level where a user is created and logged in for every request. Instead, log in only once before the tests execution starts (i.e. a before-all hook), save the token in some local storage and reuse it across requests. This seems to violate one of the core testing principles - keep the test autonomous without resource coupling. While this is a valid worry, in E2E tests performance is a key concern, and creating 1-3 API requests before starting each individual test might lead to horrible execution times. Reusing credentials doesn't mean the tests have to act on the same user records - if relying on user records (e.g. testing a user's payments history), make sure to generate those records as part of the test and avoid sharing their existence with other tests. Also remember that the backend can be faked - if your tests are focused on the frontend, it might be better to isolate it and stub the backend API (see bullet 3.6).
❌ Otherwise: Given 200 test cases and assuming login=100ms, that's 20 seconds spent only on logging in again and again
✏ Code Examples
let authenticationToken;
// happens before ALL tests run
before(() => {
cy.request('POST', 'http://localhost:3000/login', {
username: Cypress.env('username'),
password: Cypress.env('password'),
})
.its('body')
.then((responseFromLogin) => {
authenticationToken = responseFromLogin.token;
})
})
// happens before EACH test
beforeEach(() => {
cy.visit('/home', {
onBeforeLoad (win) {
win.localStorage.setItem('token', JSON.stringify(authenticationToken))
},
})
})
✅ Do: For production monitoring and development-time sanity checks, run a single E2E test that visits all/most of the site pages and ensures nothing breaks. This type of test brings a great return on investment as it's very easy to write and maintain, but it can detect any kind of failure, including functional, network and deployment issues. Other styles of smoke and sanity checking are not as reliable and exhaustive - some ops teams just ping the home page (production), while some developers run many integration tests which don't discover packaging and browser issues. It goes without saying that the smoke test doesn't replace functional tests; it just aims to serve as a quick smoke detector
❌ Otherwise: Everything might seem perfect, all tests pass, the production health-check is also positive, but the Payment component had some packaging issue and only the /Payment route is not rendering
✏ Code Examples
it("When doing smoke testing over all page, should load them all successfully", () => {
// exemplified using Cypress but can be implemented easily
// using any E2E suite
cy.visit("https://mysite.com/home");
cy.contains("Home");
cy.contains("https://mysite.com/Login");
cy.contains("Login");
cy.contains("https://mysite.com/About");
cy.contains("About");
});
✅ Do: Besides increasing app reliability, tests bring another attractive opportunity to the table - serving as live app documentation. Since tests inherently speak in a less-technical, product/UX language, using the right tools they can serve as a communication artifact that greatly aligns all the peers - developers and their customers. For example, some frameworks allow expressing the flow and expectations (i.e. the test plan) using a human-readable language, so any stakeholder, including product managers, can read, approve and collaborate on the tests, which then become the live requirements document. This technique is also referred to as 'acceptance testing', as it allows the customer to define their acceptance criteria in plain language. This is BDD (behavior-driven testing) at its purest form. One of the popular frameworks that enables this is Cucumber, which has a JavaScript flavor - see the example below. Another similar yet different opportunity, StoryBook, allows exposing UI components as a graphic catalog where one can walk through the various states of each component (e.g. render a grid w/o filters, render that grid with multiple rows or with none, etc.), see how it looks, and see how to trigger that state - this can also appeal to product folks, but mostly serves as live documentation for developers who consume those components.
❌ Otherwise: After investing top resources on testing, it's just a pity not to leverage this investment and win great value
✏ Code Examples
// this is how one can describe tests using cucumber: plain language that allows anyone to understand and collaborate
Feature: Twitter new tweet
I want to tweet something in Twitter
@focus
Scenario: Tweeting from the home page
Given I open Twitter home
Given I click on "New tweet" button
Given I type "Hello followers!" in the textbox
Given I click on "Submit" button
Then I see message "Tweet saved"
✅ Do: Set up automated tools to capture UI screenshots when changes are introduced and detect visual issues like content overlapping or breaking. This ensures that not only the right data is prepared but also that the user can conveniently see it. This technique is not widely adopted; our testing mindset leans toward functional tests, but it's the visuals that the user experiences, and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generating and saving screenshots for inspection by human eyes. While this approach might be sufficient for small apps, it's as flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of a clear definition - this is where the field of 'Visual Regression' chimes in and solves this puzzle by comparing the old UI with the latest changes and detecting differences. Some OSS/free tools can provide some of this functionality (e.g. wraith, PhantomCSS) but might charge significant setup time. The commercial line of tools (e.g. Applitools, Percy.io) takes it a step further by smoothing the installation and packing advanced features like a management UI, alerting, smart capturing that eliminates 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/CSS changes that led to the issue
❌ Otherwise: How good is a content page that displays great content (100% of tests passed) and loads instantly, but half of the content area is hidden?
✏ Code Examples
# Add as many domains as necessary. Key will act as a label
domains:
  english: "http://www.mysite.com"
# Type screen widths below, here are a couple of examples
screen_widths:
  - 600
  - 768
  - 1024
  - 1280
# Type page URL paths below, here are a couple of examples
paths:
  about:
    path: /about
    selector: '.about'
  subscribe:
    selector: '.subscribe'
    path: /subscribe
👏 Doing It Right Example: Using Applitools to get snapshot comparison and other advanced features
import * as todoPage from "../page-objects/todo-page";
describe("visual validation", () => {
before(() => todoPage.navigate());
beforeEach(() => cy.eyesOpen({ appName: "TAU TodoMVC" }));
afterEach(() => cy.eyesClose());
it("should look good", () => {
cy.eyesCheckWindow("empty todo list");
todoPage.addTodo("Clean room");
todoPage.addTodo("Learn javascript");
cy.eyesCheckWindow("two todos");
todoPage.toggleTodo(0);
cy.eyesCheckWindow("mark as completed");
});
});
✅ Do: The purpose of testing is to get enough confidence for moving fast; obviously, the more code is tested, the more confident the team can be. Coverage is a measure of how many code lines (and branches, statements, etc.) are being reached by the tests. So how much is enough? 10-30% is obviously too low to get any sense of the build correctness; on the other hand, 100% is very expensive and might shift your focus from the critical paths to the exotic corners of the code. The long answer is that it depends on many factors, like the type of application - if you're building the next generation of the Airbus A380 then 100% is a must, while for a cartoon pictures website 50% might be too much. Although most testing enthusiasts claim that the right coverage threshold is contextual, most of them also mention the number 80% as a rule of thumb (Fowler: "in the upper 80s or 90s") that presumably should satisfy most applications.
Implementation tips: You may want to configure your continuous integration (CI) to have a coverage threshold (Jest link) and stop a build that doesn't meet this standard (it's also possible to configure a threshold per component, see the code example below). On top of this, consider detecting build coverage decrease (when newly committed code has less coverage) - this will push developers to raise or at least preserve the amount of tested code. All that said, coverage is only one measure, a quantitative one, that is not enough to tell the robustness of your testing. And it can also be fooled, as illustrated in the next bullets
❌ Otherwise: Confidence and numbers go hand in hand; without really knowing that you tested most of the system, there will also be some fear, and fear will slow you down
✏ Code Examples
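For example, a Jest configuration that enforces a global coverage threshold plus a stricter per-component one (the paths and numbers are only examples):
//jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    },
    //a critical component can demand a higher bar than the global one
    "./src/components/payment/": {
      branches: 100,
      lines: 100
    }
  }
};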
✅ Do: Some issues sneak just under the radar and are really hard to find using traditional tools. These are not really bugs but more of surprising application behaviors that might have a severe impact. For example, often some code areas are never or rarely invoked - you thought that the 'PricingCalculator' class always sets the product price, but it turns out it is actually never invoked, although we have 10,000 products in the DB and many sales... Code coverage reports help you realize whether the application behaves the way you believe it does. Beyond that, they can also highlight which types of code are not tested - being informed that 80% of the code is tested doesn't tell you whether the critical parts are covered. Generating reports is easy - just run your app in production or during testing with coverage tracking, then see colorful reports that highlight how frequently each code area is invoked. If you take your time to glimpse into this data - you might find some gotchas
❌ Otherwise: If you don't know which parts of your code are left untested, you don't know where the issues might come from
✏ Code Examples
Based on a real-world scenario where we tracked our application usage in QA and found interesting login patterns (Hint: the number of login failures is disproportionate; something is clearly wrong. It finally turned out that some frontend bug keeps hitting the backend login API)
✅ Do: The traditional coverage metric often lies: it may show you 100% code coverage while none of your functions, not even one, returns the right response. How come? It simply measures which lines of code the tests visited, but it doesn't check whether the tests actually tested anything, i.e. asserted for the right response. It's like someone who travels for business and shows off his passport stamps: this doesn't prove any work was done, only that he visited a few airports and hotels.
Mutation-based testing is here to help by measuring the amount of code that was actually TESTED, not just VISITED. Stryker is a JavaScript library for mutation testing, and the implementation is really neat:
(1) It intentionally changes the code and "plants bugs". For example, the code newOrder.price===0 becomes newOrder.price!=0. These "bugs" are called mutations.
(2) It runs the tests; if all succeed then we have a problem: the tests didn't serve their purpose of discovering bugs, so the mutations are said to have survived. If the tests failed, then great, the mutations were killed.
Knowing that all or most of the mutations were killed gives much higher confidence than traditional coverage, and the setup time is similar.
❌ Otherwise: You'll be fooled into believing that 85% coverage means your tests will detect bugs in 85% of your code.
â Code Examples
function addNewOrder(newOrder) {
logger.log(`Adding new order ${newOrder}`);
DB.save(newOrder);
  Mailer.sendMail(newOrder.assignee, `A new order was placed ${newOrder}`);
return { approved: true };
}
it("Test addNewOrder, don't use such test names", () => {
addNewOrder({ assignee: "John@mailer.com", price: 120 });
}); //Triggers 100% code coverage, but it doesn't check anything
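To make this concrete, here is a sketch of one common shape of a Stryker configuration (assuming the Mocha runner; the file globs are illustrative). Running npx stryker run then reports how many mutants survived:

// stryker.conf.js
module.exports = {
  // the files Stryker will plant mutations in
  mutate: ['src/**/*.js'],
  testRunner: 'mocha',
  reporters: ['clear-text', 'html'],
  // run only the tests covering each mutant, keeping the run fast
  coverageAnalysis: 'perTest',
};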
✅ Do: A set of ESLint plugins were built specifically for inspecting test code patterns and discovering issues. For example, eslint-plugin-mocha will warn when a test is written at the global level (not a child of a describe() statement) or when tests are skipped, which might lead to a false belief that all tests are passing. Similarly, eslint-plugin-jest can, for example, warn when a test has no assertions at all (not checking anything).
❌ Otherwise: Seeing 90% code coverage and 100% green tests will make your face wear a big smile, but only until you realize that many tests aren't asserting for anything and many test suites were simply skipped. Hopefully, you didn't deploy anything based on this false observation.
â Code Examples
describe("Too short description", () => {
const userToken = userService.getDefaultToken() // *error:no-setup-in-describe, use hooks (sparingly) instead
it("Some description", () => {});//* error: valid-test-description. Must include the word "Should" + at least 5 words
});
it.skip("Test name", () => {// *error:no-skipped-tests, error:error:no-global-tests. Put tests only under describe or suite
expect("somevalue"); // error:no-assert
});
it("Test name", () => {*//error:no-identical-title. Assign unique titles to tests
});
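Enabling these checks is a one-time setup. A sketch of an ESLint override that applies the recommended test rules only to test files (shown with eslint-plugin-jest; eslint-plugin-mocha offers an equivalent plugin:mocha/recommended preset):

// .eslintrc.js
module.exports = {
  overrides: [
    {
      files: ['test/**/*.test.js'],
      plugins: ['jest'],
      // warns on skipped tests, tests without assertions, identical titles, etc.
      extends: ['plugin:jest/recommended'],
    },
  ],
};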
✅ Do: Linters are a free lunch: with a 5-minute setup you get a free auto-pilot that guards your code and catches significant issues. Gone are the days when linting was about cosmetics. Nowadays, linters can catch severe issues like errors that are not thrown correctly, thereby losing their information. On top of a basic rule set such as ESLint standard or Airbnb style, eslint-plugin-chai-expect can discover tests without assertions, eslint-plugin-promise can discover promises that never resolve, eslint-plugin-security can discover eager regular expressions that might be used for DOS attacks, and eslint-plugin-you-dont-need-lodash-underscore can warn when the code uses utility-library methods that are now part of the V8 core, like Lodash's _.map(…).
❌ Otherwise: Consider a rainy day where production keeps crashing but the logs don't display an error stack trace. What happened? Your code mistakenly threw a non-error object and the stack trace was lost — a good reason for banging your head against a brick wall. A 5-minute linter setup could have detected this typo and saved your day.
✏ Code Examples
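A sketch of layering the specialized plugins mentioned above on top of a base rule set (plugin names as published on npm; the rule selection is illustrative):

// .eslintrc.js
module.exports = {
  extends: [
    'eslint:recommended',
    'plugin:promise/recommended', // e.g. promises that never resolve
    'plugin:security/recommended', // e.g. regexes that are prone to DOS attacks
  ],
  plugins: ['promise', 'security', 'chai-expect'],
  rules: {
    // a test that calls expect() but never actually asserts
    'chai-expect/missing-assertion': 'error',
  },
};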
✅ Do: Using a CI with shiny quality inspections like testing, linting, vulnerability checks, etc.? Help developers run this pipeline locally as well, so they get instant feedback, and shorten the feedback loop. Why? An efficient testing process consists of many iterative loops: (1) try-out -> (2) feedback -> (3) refactor. The faster the feedback, the more improvement iterations a developer can perform per module, and the better the result. On the flip side, when feedback is late to come, fewer improvement iterations fit into a single day, and the team may have already moved on to another topic/task/module and may not be up for refining that module.
Practically, some CI vendors (example: CircleCI local CLI) allow running the pipeline locally. Some commercial tools like wallaby provide highly valuable testing insights as a developer prototype. Alternatively, you may simply add an npm script to package.json that runs all the quality-check commands (e.g. test, lint, vulnerabilities), use a tool like concurrently for parallelization, and return a non-zero exit code if any of the tools fail. Now the developer only has to invoke one command, e.g. "npm run quality", to get instant feedback. Consider also aborting a commit if the quality check fails, using a githook (husky can help).
❌ Otherwise: When the quality results arrive only the day after the code was written, testing is no longer a fluent part of development but rather an after-the-fact formal artifact.
✏ Code Examples
👏 Doing it right example: npm scripts that perform code quality inspections, all run in parallel, either on demand or automatically when a developer is trying to push new code.
"scripts": {
"inspect:sanity-testing": "mocha **/**--test.js --grep \"sanity\"",
"inspect:lint": "eslint .",
"inspect:vulnerabilities": "npm audit",
"inspect:license": "license-checker --failOn GPLv2",
"inspect:complexity": "plato .",
"inspect:all": "concurrently -c \"bgBlue.bold,bgMagenta.bold,yellow\" \"npm:inspect:quick-testing\" \"npm:inspect:lint\" \"npm:inspect:vulnerabilities\" \"npm:inspect:license\""
},
"husky": {
"hooks": {
"precommit": "npm run inspect:all",
"prepush": "npm run inspect:all"
}
}
✅ Do: End-to-end (e2e) testing is the main challenge of every CI pipeline: creating an ephemeral, production-like environment on the fly, together with all the related cloud services, is tedious and expensive. Finding the best compromise is your job: Docker-compose allows crafting an isolated docker environment with identical containers using a single plain text file, but the backing technology (e.g. networking, deployment model) differs from the real production environment. Combining it with AWS Local lets you work with stubs of real AWS services. If you went serverless, multiple frameworks like serverless and AWS SAM allow local invocation of FaaS code.
The huge Kubernetes ecosystem has yet to formalize a standard, convenient tool for local and CI mirroring, although many new tools are announced frequently. One approach is to run a minimized Kubernetes using tools like Minikube or MicroK8s, which resemble the real thing with less overhead. Another approach is to test over a remote, real Kubernetes: some CI providers (e.g. Codefresh) integrate natively with Kubernetes environments and make it easy to run the CI pipeline over the real thing, while other providers allow custom scripting against a remote Kubernetes.
❌ Otherwise: Using different technologies for production and for testing demands maintaining two deployment models and keeps the developers and the operations team separated.
✏ Code Examples
👏 Example: a CI pipeline that generates a Kubernetes cluster on the fly (source: Dynamic-environments Kubernetes)
deploy:
stage: deploy
image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
script:
- ./configureCluster.sh $KUBE_CA_PEM_FILE $KUBE_URL $KUBE_TOKEN
- kubectl create ns $NAMESPACE
- kubectl create secret -n $NAMESPACE docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL"
- mkdir .generated
- echo "$CI_BUILD_REF_NAME-$CI_BUILD_REF"
- sed -e "s/TAG/$CI_BUILD_REF_NAME-$CI_BUILD_REF/g" templates/deals.yaml | tee ".generated/deals.yaml"
- kubectl apply --namespace $NAMESPACE -f .generated/deals.yaml
- kubectl apply --namespace $NAMESPACE -f templates/my-sock-shop.yaml
environment:
name: test-for-ci
✅ Do: When done right, testing is your 24/7 friend, providing almost instant feedback. In practice, executing 500 CPU-bound unit tests on a single thread can take too long. Luckily, modern test runners and CI platforms (like Jest, AVA and Mocha extensions) can parallelize tests over multiple processes and significantly improve the time to feedback. Some CI vendors even parallelize tests across containers (!), which shortens the feedback loop further. Whether locally over multiple processes, or over some cloud CLI using multiple machines, parallelization demands keeping the tests autonomous, as each might run on a different process.
❌ Otherwise: Getting your test results one hour after pushing new code, when you are already coding the next feature, is a great recipe for making testing less relevant.
✏ Code Examples
👏 Doing it right example: thanks to test parallelization, Mocha parallel and Jest easily outrun the traditional Mocha (source: JavaScript Test-Runners Benchmark)
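As an illustrative sketch (assuming Jest, which parallelizes over worker processes by default), the worker count can be tuned to the CI machine at hand:

// jest.config.js
module.exports = {
  // use half of the available cores; a percentage keeps the config portable across machines
  maxWorkers: '50%',
};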
✅ Do: Licensing and plagiarism issues are probably not your main concern right now, but why not tick this box as well in 10 minutes? A bunch of npm packages like license check and plagiarism check (commercial, with a free plan) can easily be baked into your CI pipeline and inspect for issues like dependencies with restrictive licenses or code that was copy-pasted from Stack Overflow and apparently violates some copyrights.
❌ Otherwise: Developers might unintentionally use packages with inappropriate licenses or copy-paste commercial code and run into legal issues.
✏ Code Examples
// install license-checker locally or on your CI environment
npm install -g license-checker

// scan all licenses and fail with a non-zero exit code if an unauthorized license is found; the CI environment should catch this failure and stop the build
license-checker --summary --failOn BSD
✅ Do: Even the most reputable dependencies, such as Express, have known vulnerabilities. This can easily be tamed using community tools such as npm audit, or commercial tools like snyk (which also offers a free community version). Both can be invoked from your CI on every build.
❌ Otherwise: Keeping your code clean of vulnerabilities without dedicated tools would require you to constantly follow online publications about new threats. Quite tedious.
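✏ Code Examples

A minimal sketch of wiring this into the build: npm audit ships with npm itself, and its --audit-level flag (available in recent npm versions) controls which severity makes it exit with a non-zero code that the CI can catch:

// fail the CI build when a dependency has a known vulnerability of moderate severity or higher
npm audit --audit-level=moderate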
✅ Do: Yarn and npm's introduction of package-lock.json brought along a serious challenge (the road to hell is paved with good intentions): by default, packages no longer get updates. Even a team that deploys many times with "npm install" and "npm update" won't receive any new updates. The result is subpar dependent package versions at best, and vulnerable code at worst. Teams now rely on developers' goodwill and memory to manually update package.json, or use tools like ncu manually. A more reliable way would be to automate the process of getting the most reliable dependency versions. There is no silver-bullet solution yet, but two possible automation roads exist:
(1) Use the CI to fail builds that have obsolete dependencies, with tools such as "npm outdated" or "npm-check-updates (ncu)". This forces developers to update dependencies.
(2) Use commercial tools that scan the code and automatically open pull requests with updated dependencies. One interesting question that remains is the dependency update policy: updating on every patch generates too much overhead, while updating right when a major version lands may point to an unstable release (many packages are found vulnerable in the very first days after release; see the eslint-scope incident).
An efficient update policy may allow some "vesting period": let the code lag a few versions behind @latest before considering the local copy obsolete (e.g. local version is 1.3.1 while the repository version is 1.3.8).
❌ Otherwise: Your production will run packages that have been explicitly tagged as risky by their own authors.
✏ Code Examples
👏 Example: ncu can be used, manually or within a CI pipeline, to detect how far the code lags behind the latest versions.
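A small sketch of the tool's two standard modes:

// report how far package.json lags behind the latest versions, without changing anything
ncu

// upgrade package.json to the latest versions; re-run the tests afterwards
ncu -u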
✅ Do: This post is focused on testing advice that is related to, or at least can be exemplified with, Node JS. This bullet, however, groups a few non-Node-related but well-known CI tips:
- Use a declarative syntax. This is the only option for most vendors, although older versions of Jenkins also allow using code or the UI.
- Choose a vendor with native Docker support.
- Fail early, run your fastest tests first. Create a "smoke testing" step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provides snappy feedback to the code committer.
- Make it easy to skim through all build artifacts, including test reports, coverage reports, mutation reports, logs, etc.
- Create multiple pipelines/jobs for each event, and reuse steps between them. For example, configure a job for feature-branch commits and a different one for master-branch PRs. Let each reuse logic through shared steps (most vendors provide some mechanism for code reuse).
- Never embed secrets in a job declaration. Grab them from a secret store or from the job's configuration.
- Explicitly bump versions in a release build, or at least ensure the developer did so.
- Build only once and run all the inspections over the single build artifact (e.g. Docker image).
- Test in an ephemeral environment that doesn't carry state over between builds. Caching node_modules might be the only exception.
❌ Otherwise: You'll miss out on years of accumulated wisdom.
⚪ ️ 5.9 Build matrix: Run the same CI steps using multiple Node versions
✅ Do: Quality checking is about serendipity: the more ground you cover, the earlier you discover issues. When developing reusable packages, or running a multi-customer production with various configurations and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use MySQL for some customers and Postgres for others, some CI vendors support a feature called "Matrix" which runs the suite of tests against all permutations of, say, MySQL, Postgres, and multiple Node versions like 8, 9 and 10. This is done using configuration only, without any additional effort (assuming you already have tests or any other quality checks). Other CIs that don't support Matrix might have extensions or tweaks that allow it.
❌ Otherwise: After all the hard work of writing the tests, are we going to let bugs sneak in only because of configuration issues?
✏ Code Examples
👏 Example: using Travis (a CI vendor) build definition to run the same tests over multiple Node versions
language: node_js
node_js:
- "7"
- "6"
- "5"
- "4"
install:
- npm install
script:
- npm run test
Role: Writer
About: I'm an independent consultant who works with Fortune 500 companies and garage startups on polishing their JS & Node.js applications. More than any other topic, I'm fascinated by, and aim to master, the art of testing. I'm also the author of Node.js Best Practices
ð Online Course: Liked this guide and wish to take your testing skills to the extreme? Consider visiting my comprehensive course Testing Node.js & JavaScript From A To Z
Follow:
Role: Tech reviewer and advisor
Took care to revise, improve, lint and polish all the texts
About: full-stack web engineer, Node.js & GraphQL enthusiast
Role: Concept, design and great advice
About: A savvy frontend developer, CSS expert and emojis freak
Role: Helps keep this project running, and reviews security related practices
About: Loves working on Node.js projects and web application security.
Thanks goes to these wonderful people who have contributed to this repository!